
feat(pi): Use ModelRegistry & allow empty auth to support custom models/providers.#1284

Merged
Wirasm merged 6 commits into coleam00:dev from matt2000:patch-pi-local-models
Apr 27, 2026
Conversation

@matt2000
Contributor

@matt2000 matt2000 commented Apr 18, 2026

Summary

Closes #1096

Describe this PR in 2-5 bullets:

  • Problem: The Pi provider can't find custom models or providers (in the Pi sense) that have been configured for Pi.
  • Why it matters: This is necessary to use, e.g., an offline model via ollama, or any provider or model that is not predefined in Pi's static catalog. It also allows overriding some model configuration values.
  • What changed: Removed the use of the pi-ai package's getModel function, which only checks the static catalog. Instead we use the find() method of the ModelRegistry we have to construct anyway, which loads the user's Pi model configurations.
    This also has the side benefit of removing the lookupPiModel method, which was doing some awkward contortions to achieve type casting that ModelRegistry.find already handles.
  • What did not change (scope boundary): Anything outside of pi/provider.ts.
    There was a comment about not hitting ~/.pi, but this was never true, because we had to read auth.json. If we want to minimize disk reads by caching the two reads, I suggest addressing that in a follow-up task unless we have actual evidence that this change introduces a significant performance burden to existing use cases.
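The lookup change described above can be sketched as follows. This is an illustrative stand-in, not the real pi-ai / pi-coding-agent SDK: the names ModelRegistry, create, and find mirror the PR's description, but the types and implementations here are hypothetical.

```typescript
interface ModelInfo {
  provider: string;
  id: string;
}

// Static catalog (old behavior): only predefined models ship with the package.
const STATIC_CATALOG: ModelInfo[] = [
  { provider: "anthropic", id: "claude-sonnet" },
];

// Old-style lookup: misses anything the user configured themselves.
function getModel(provider: string, modelId: string): ModelInfo | undefined {
  return STATIC_CATALOG.find((m) => m.provider === provider && m.id === modelId);
}

// New-style lookup: the registry merges the static catalog with the
// user's own entries (in the real SDK, loaded from ~/.pi/agent/models.json).
class ModelRegistry {
  private constructor(private models: ModelInfo[]) {}
  static create(userModels: ModelInfo[]): ModelRegistry {
    return new ModelRegistry([...STATIC_CATALOG, ...userModels]);
  }
  find(provider: string, modelId: string): ModelInfo | undefined {
    return this.models.find((m) => m.provider === provider && m.id === modelId);
  }
}

const userConfigured: ModelInfo[] = [
  { provider: "lm-studio", id: "apriel-1.5-15b-thinker" },
];
const registry = ModelRegistry.create(userConfigured);

// The static lookup misses the custom model; the registry finds it.
console.log(getModel("lm-studio", "apriel-1.5-15b-thinker"));      // undefined
console.log(registry.find("lm-studio", "apriel-1.5-15b-thinker")); // found
```

The point of the change is exactly this difference: both paths resolve catalog models, but only the registry path sees user-configured providers.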

UX Journey

Context

User has Pi configured to use custom models, perhaps fine-tuned models on a major cloud provider, perhaps standard models on a less known but compatible cloud provider, or perhaps a model running locally via ollama, llamacpp, mlx, or similar.

Before

This works fine in Pi, but when Archon is configured to use the Pi provider with the same model, Archon reports the model as not found.

After

Now this works fine in Archon too.

Label Snapshot

  • Risk: risk: low
  • Size: size: S
  • Scope: adapters
  • Module: providers:community.pi

Change Metadata

  • Change type: bug|feature
  • Primary scope: adapters

Validation Evidence (required)

Commands and result summary:

bun run type-check
bun run lint
bun run format:check
bun run test
# Or all at once:
bun run validate
  • Evidence provided (test/log/trace/screenshot):
Archon % bun run validate
$ bun run check:bundled && bun run type-check && bun run lint --max-warnings 0 && bun run format:check && bun run test
$ bun run scripts/generate-bundled-defaults.ts --check
bundled-defaults.generated.ts is up to date (36 commands, 20 workflows).
$ bun --filter '*' type-check && bun x tsc --noEmit -p scripts/tsconfig.json
@archon/isolation type-check $ bun x tsc --noEmit
└─ Done in 531 ms
@archon/adapters type-check $ bun x tsc --noEmit
└─ Done in 1.99 s
@archon/git type-check $ bun x tsc --noEmit
└─ Done in 447 ms
@archon/providers type-check $ bun x tsc --noEmit
└─ Done in 810 ms
@archon/web type-check $ tsc --noEmit
└─ Done in 2.30 s
@archon/paths type-check $ bun x tsc --noEmit
└─ Done in 378 ms
@archon/workflows type-check $ bun x tsc --noEmit
└─ Done in 1.25 s
@archon/core type-check $ bun x tsc --noEmit
└─ Done in 1.25 s
@archon/cli type-check $ bun x tsc --noEmit
│ src/search_esc.ts(9,40): error TS2339: Property 'promises' does not exist on type 'Promise<{ default: typeof import("fs"); rename: typeof rename; renameSync(oldPath: PathLike, newPath: PathLike): void; truncate: typeof truncate; truncateSync(path: PathLike, len?: number | undefined): void; ... 95 more ...; constants: typeof constants; }>'.
│ src/search_esc.ts(15,30): error TS2345: Argument of type 'Buffer<ArrayBuffer>' is not assignable to parameter of type 'string'.
└─ Exited with code 2
@archon/server type-check $ bun x tsc --noEmit
└─ Done in 2.14 s
  • If any command is intentionally skipped, explain why:
    n/a

Security Impact (required)

  • New permissions/capabilities? (No)
  • New external network calls? (No)
  • Secrets/tokens handling changed? (No)
  • File system access scope changed? (No)
  • If any Yes, describe risk and mitigation:
    n/a

Compatibility / Migration

  • Backward compatible? (Yes)
  • Config/env changes? (No)
  • Database migration needed? (No)
  • If yes, exact upgrade steps:
    n/a

Human Verification (required)

What was personally validated beyond CI:

  • Verified scenarios: Ran a workflow using custom provider and model.
  • Edge cases checked: n/a
  • What was not verified: Custom provider requiring authentication

Side Effects / Blast Radius (required)

  • Affected subsystems/workflows: n/a
  • Potential unintended effects: A command may now try to run without valid credentials because we must allow for the possibility of providers that do not require any credentials.
  • Guardrails/monitoring for early detection: Log message.

Rollback Plan (required)

  • Fast rollback command/path: git revert
  • Feature flags or config toggles (if any): No
  • Observable failure symptoms: n/a

Summary by CodeRabbit

  • Bug Fixes

    • Unmapped Pi providers now log an informational message (with an env-var hint) when credentials are missing instead of throwing; env-mapped providers still fail fast with a login hint. Model-not-found errors now surface clearer diagnostics.
  • Refactor

    • Streamlined per-request model resolution and authentication lifecycle so credential checks and model lookup share the same storage and error handling.
  • Tests

    • Tests updated to use registry-based model lookup, remove the old model mock, and clear logger/registry mocks between tests.
  • Documentation

    • Docs updated to describe local/custom inference paths, supported local backends, model naming, and auth-missing runtime behavior.
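The credential behavior summarized above (fail fast for env-var-mapped providers, log pi.auth_missing and proceed for unmapped ones) can be sketched like this. The PI_PROVIDER_ENV_VARS table contents and the surrounding plumbing are assumptions for illustration; only the event names and the throw-vs-log split come from the PR.

```typescript
// Hypothetical subset of the env-var mapping for known cloud providers.
const PI_PROVIDER_ENV_VARS: Record<string, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  openai: "OPENAI_API_KEY",
};

type LogFn = (event: string, ctx: Record<string, unknown>) => void;

function checkCredentials(
  provider: string,
  apiKey: string | undefined,
  logInfo: LogFn
): void {
  if (apiKey) return; // credentials present, nothing to check

  const envVar = PI_PROVIDER_ENV_VARS[provider];
  if (envVar) {
    // Known cloud provider with no credentials: fail fast with a hint.
    throw new Error(
      `Missing credentials for '${provider}'. Set ${envVar} or run 'pi /login'.`
    );
  }
  // Unmapped (likely local/custom) provider: log and proceed, since
  // backends like ollama or LM Studio may not need credentials at all.
  logInfo("pi.auth_missing", { piProvider: provider });
}

const events: string[] = [];
checkCredentials("lm-studio", undefined, (e) => events.push(e)); // proceeds
console.log(events);
```

The design choice is that an unknown provider id is treated as evidence of a local/custom backend rather than a misconfiguration, so the request is allowed through with an informational log instead of an error.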

@coderabbitai

coderabbitai Bot commented Apr 18, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Refactors Pi provider model resolution and auth to use pi-coding-agent's AuthStorage + ModelRegistry.create(authStorage).find(). Credential behavior now throws for env-var-mapped providers but logs info (pi.auth_missing) for unmapped ones. Tests and docs updated to match new resolution, error, and logging behavior.

Changes

  • Provider implementation — Pi (packages/providers/src/community/pi/provider.ts): Replaced the @mariozechner/pi-ai lookup with the runtime pi-coding-agent flow: create AuthStorage, call ModelRegistry.create(authStorage), and use modelRegistry.find(provider, modelId); surface models.json load errors in "model not found" messages; change credential handling to throw for env-var-mapped providers and emit pi.auth_missing info for unmapped providers; trim outdated comments.
  • Tests — Pi provider (packages/providers/src/community/pi/provider.test.ts): Switch test mocks from @mariozechner/pi-ai.getModel to ModelRegistry.create().find behavior; assert ModelRegistry.create() is called with AuthStorage; add assertions for pi.auth_storage_init_failed, pi.auth_missing, and pi.model_not_found logs; update cleanup to clear logger and ModelRegistry mocks.
  • Docs — Getting started (Pi) (packages/docs-web/src/content/docs/getting-started/ai-assistants.md): Document local/custom inference backends (LM Studio, ollama, llamacpp, custom OpenAI-compatible endpoints via ~/.pi/agent/models.json), note the pi.auth_missing info emission for missing credentials, and clarify model naming and credential expectations for local vs cloud providers.

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant PiProvider
  participant AuthStorage
  participant ModelRegistry
  participant Model
  participant Logger

  Client->>PiProvider: sendQuery(provider, modelId, ...)
  PiProvider->>AuthStorage: AuthStorage.create(config)
  alt authStorage init fails
    AuthStorage-->>PiProvider: throw Error
    PiProvider->>Logger: error("pi.auth_storage_init_failed", ...)
    PiProvider-->>Client: throw Pi-framed error
  else authStorage created
    PiProvider->>ModelRegistry: ModelRegistry.create(AuthStorage)
    ModelRegistry-->>PiProvider: registry
    PiProvider->>ModelRegistry: registry.find(provider, modelId)
    alt model found
      ModelRegistry-->>PiProvider: Model
      PiProvider->>Model: execute request
      Model-->>PiProvider: response
      PiProvider-->>Client: response
    else model not found
      ModelRegistry-->>PiProvider: undefined (maybe with getError() detail)
      PiProvider->>Logger: error("pi.model_not_found", { provider, modelId, modelsJsonError })
      PiProvider-->>Client: throw "Pi model not found" (includes models.json load error if present)
    end
    alt missing credentials & provider mapped to env var
      PiProvider->>Logger: error("missing credentials", { piProvider }, "env-var table + 'pi /login' hint")
      PiProvider-->>Client: throw missing-credentials error
    else missing credentials & provider unmapped
      PiProvider->>Logger: info("pi.auth_missing", { piProvider }, "env-var table hint")
      PiProvider-->>Model: proceed (allow local/uncredentialed providers)
    end
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 I hopped from getModel to a tidy registry line,
AuthStorage born once, now every call is fine.
find() seeks models, returns truth or none,
For unmapped providers I whisper info — then run.
Tests tidy their mocks; I nibble a carrot, done.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Title check ✅ Passed: The title clearly summarizes the main change: switching to ModelRegistry and enabling empty auth to support custom models/providers.
  • Description check ✅ Passed: The description includes most required sections (Summary, UX Journey context, Security/Compatibility/Validation, etc.) with sufficient detail on problem, scope, and validation evidence.
  • Linked Issues check ✅ Passed: The PR addresses #1096 by enabling ModelRegistry-based resolution of user-configured custom models and providers (ollama, etc.) and allowing credentialless local providers; direct config of base URLs and per-provider model overrides remain out of scope per the author.
  • Out of Scope Changes check ✅ Passed: All changes are scoped to the Pi provider refactoring (test mocks, provider.ts logic, docs); no unrelated file modifications or scope creep detected beyond the stated objective.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@matt2000 matt2000 changed the title Use ModelRegistry & allow empty auth to support custom models/providers. Pi provider: Use ModelRegistry & allow empty auth to support custom models/providers. Apr 18, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/providers/src/community/pi/provider.ts (1)

89-100: ⚠️ Potential issue | 🟠 Major

ModelRegistry.create() reads from disk — contradicts the "in-memory" invariant documented in comments.

The class-level doc (lines 52–57) and inline comment (lines 199–201) claim that "the server never touches ~/.pi/" and that "ModelRegistry … stay in-memory". However, lines 92–93 call AuthStorage.create() and ModelRegistry.create(authStorage), both of which read from disk (~/.pi/agent/auth.json and ~/.pi/agent/models.json). This breaks the documented invariant on every request.

To preserve the guarantee:

  • Use ModelRegistry.inMemory(authStorage) as the default to keep only the built-in catalog (no disk access), or
  • Update the comments to reflect that disk reads now happen per-request and explain why this is necessary.

Additionally, AuthStorage.create() and ModelRegistry.create() are instantiated inside sendQuery() on every invocation. Consider hoisting to module scope or caching on the provider instance to avoid repeated filesystem access per-request (similar to the cached logger at lines 46–49).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 89 - 100, The
code currently calls AuthStorage.create() and ModelRegistry.create(authStorage)
inside sendQuery which performs disk reads and violates the in-memory invariant;
change to use ModelRegistry.inMemory(authStorage) (or otherwise call
ModelRegistry.inMemory() directly) to avoid reading ~/.pi/ per request, and
hoist or cache AuthStorage and ModelRegistry at module scope or as fields on the
provider instance (mirroring the cached logger pattern) so they are created once
instead of on every sendQuery invocation; alternatively, if disk reads are
required, update the class-level and inline comments to state that
AuthStorage.create() and ModelRegistry.create() read from disk and justify
per-request behavior.
packages/providers/src/community/pi/provider.test.ts (1)

77-128: ⚠️ Potential issue | 🔴 Critical

Mock must provide ModelRegistry.create—tests calling sendQuery() will fail at runtime.

Production code (provider.ts line 93) calls ModelRegistry.create(authStorage), but the mock only exposes ModelRegistry: { inMemory: mockModelRegistryInMemory }. Every test from line 195 onward that calls sendQuery() will throw TypeError: ModelRegistry.create is not a function before reaching any assertions.

Additionally, both provider.test.ts and session-resolver.test.ts mock @mariozechner/pi-coding-agent with different partial implementations (violating the guideline to use spyOn() instead and separate test invocations to avoid process-global pollution). The mock should expose create alongside inMemory to match what production actually calls.

Suggested fix
 const mockModelRegistryFind = mock((provider: string, modelId: string) => {
   if (provider === 'nonexistent') return undefined;
   return { id: modelId, provider, name: `${provider}/${modelId}` };
 });
-const mockModelRegistryInMemory = mock(() => ({
-  find: mockModelRegistryFind,
-}));
+const mockModelRegistryFactory = mock(() => ({
+  find: mockModelRegistryFind,
+}));
@@
-  ModelRegistry: { inMemory: mockModelRegistryInMemory },
+  ModelRegistry: {
+    create: mockModelRegistryFactory,
+    inMemory: mockModelRegistryFactory,
+  },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.test.ts` around lines 77 - 128,
The mock for `@mariozechner/pi-coding-agent` is missing ModelRegistry.create which
production code (provider.ts -> sendQuery()) calls; add a
mockModelRegistryCreate (e.g., const mockModelRegistryCreate =
mock((authStorage: unknown) => mockModelRegistryInMemory(authStorage))) and
export it alongside inMemory in the mock.module call (ModelRegistry: { create:
mockModelRegistryCreate, inMemory: mockModelRegistryInMemory }) so sendQuery()
won’t throw; also replace conflicting full-module mocks in other tests with
spyOn(...) calls or reuse this same module mock to avoid process-global mock
pollution.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/providers/src/community/pi/provider.ts`:
- Around line 137-142: Fix the typo and make the log a structured Pino event:
change "necesssarily" to "necessarily" in the message and move the prose into
the context (e.g., msg) while using a structured event name like
"pi.credentials_missing_warn" as the second argument to
getLog().info/getLog().warn; additionally, only suppress throwing when
parsed.provider is not in PI_PROVIDER_ENV_VARS (i.e., allow local/custom
providers to proceed) and otherwise log at warn (or keep throwing) so built-in
providers (parsed.provider checked against PI_PROVIDER_ENV_VARS) fail fast —
include envHint and loginHint in the context object when calling
getLog().warn/getLog().info.

---

Outside diff comments:
In `@packages/providers/src/community/pi/provider.test.ts`:
- Around line 77-128: The mock for `@mariozechner/pi-coding-agent` is missing
ModelRegistry.create which production code (provider.ts -> sendQuery()) calls;
add a mockModelRegistryCreate (e.g., const mockModelRegistryCreate =
mock((authStorage: unknown) => mockModelRegistryInMemory(authStorage))) and
export it alongside inMemory in the mock.module call (ModelRegistry: { create:
mockModelRegistryCreate, inMemory: mockModelRegistryInMemory }) so sendQuery()
won’t throw; also replace conflicting full-module mocks in other tests with
spyOn(...) calls or reuse this same module mock to avoid process-global mock
pollution.

In `@packages/providers/src/community/pi/provider.ts`:
- Around line 89-100: The code currently calls AuthStorage.create() and
ModelRegistry.create(authStorage) inside sendQuery which performs disk reads and
violates the in-memory invariant; change to use
ModelRegistry.inMemory(authStorage) (or otherwise call ModelRegistry.inMemory()
directly) to avoid reading ~/.pi/ per request, and hoist or cache AuthStorage
and ModelRegistry at module scope or as fields on the provider instance
(mirroring the cached logger pattern) so they are created once instead of on
every sendQuery invocation; alternatively, if disk reads are required, update
the class-level and inline comments to state that AuthStorage.create() and
ModelRegistry.create() read from disk and justify per-request behavior.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b27a4546-e838-4d81-84c7-f662007d9090

📥 Commits

Reviewing files that changed from the base of the PR and between d89bc76 and 0f56f8f.

📒 Files selected for processing (2)
  • packages/providers/src/community/pi/provider.test.ts
  • packages/providers/src/community/pi/provider.ts

Comment thread packages/providers/src/community/pi/provider.ts Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/providers/src/community/pi/provider.test.ts`:
- Around line 238-249: The test currently only checks that a hint is logged but
doesn't assert that the no-auth custom-provider path doesn't throw; update the
test for 'logs credential hint when Pi provider id is unknown AND no creds
available' to explicitly assert the call to PiProvider.sendQuery (via consume)
resolves without throwing (e.g., expect(...).resolves.not.toThrow or await
expect(consume(...)).resolves.toBeUndefined) and adjust the comment from
"fail-fast" to reflect that it should not throw when hinting about env-var
table; keep references to PiProvider.sendQuery and the test name so the change
is applied to the right test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5647bf79-a0d0-4899-9490-bb66018b2da7

📥 Commits

Reviewing files that changed from the base of the PR and between dc34e59 and 3f55a97.

📒 Files selected for processing (2)
  • packages/providers/src/community/pi/provider.test.ts
  • packages/providers/src/community/pi/provider.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/providers/src/community/pi/provider.ts

Comment thread packages/providers/src/community/pi/provider.test.ts
@matt2000
Contributor Author

@coderabbitai resume

@coderabbitai

coderabbitai Bot commented Apr 20, 2026

✅ Actions performed

Reviews resumed.

@Wirasm
Collaborator

Wirasm commented Apr 20, 2026

@matt2000 related to #1096 — overlapping area or partial fix.

@matt2000 matt2000 changed the title Pi provider: Use ModelRegistry & allow empty auth to support custom models/providers. feat(pi): Use ModelRegistry & allow empty auth to support custom models/providers. Apr 21, 2026
@b1skit

b1skit commented Apr 22, 2026

@matt2000 @Wirasm Thanks for the fix here. I'm a new user; I decided to try out Archon when I saw the recent Pi announcement on YouTube, and I hit this exact problem while trying to use a custom Lemonade-backed model through Archon's Pi provider.

From my testing, the failure was happening during Pi model resolution, so switching from the static catalog lookup to ModelRegistry looks like the right fix and should unblock custom/local providers and models.

If there are no remaining concerns, I’d appreciate getting this merged! :)

@b1skit

b1skit commented Apr 23, 2026

@coleam00 @matt2000 @Wirasm Apologies for the spam, mind submitting this to unblock those of us using local models? 🙏

@loopyd

loopyd commented Apr 24, 2026

We need this in for local models, please. Same issue here. This works perfectly with qwen3.6 running through the llama-cpp-local provider or other custom providers in ~/.pi/agent/models.json. Thank you for your contribution! Please keep pinging the maintainers; this one is important.

@stalsma

stalsma commented Apr 24, 2026

Chiming in here. I also saw the YouTube intro and thought I would try it out. I'm using LM Studio via Pi.

Pi can see and connect to the models without issue.

{
  "providers": {
    "lm-studio": {
      "baseUrl": "http://localhost:1234/v1",
      "api": "openai-completions",
      "apiKey": "lm-studio",
      "compat": {
        "supportsDeveloperRole": false,
        "supportsReasoningEffort": false
      },
      "models": [
        {
          "id": "qwen/qwen2.5-coder-14b",
          "name": "Qwen 2.5 Coder 14B",
          "contextWindow": 131072,
          "maxTokens": 8192
        },
        {
          "id": "apriel-1.5-15b-thinker",
          "name": "Apriel 1.5 15B Thinker",
          "reasoning": true,
          "contextWindow": 131072,
          "maxTokens": 8192
        }
      ]
    }
  }
}

But when I configure archon's config.yaml:

# Bot display name (shown in messages)
botName: Archon

# Default AI assistant (must match a registered provider, e.g. claude, codex)
defaultAssistant: pi

# Assistant defaults
assistants:
    pi:
      model: lm-studio/apriel-1.5-15b-thinker

and then run archon-assist command

description: |
  General assistance workflow using Pi instead of Claude.

nodes:
  - id: assist
    provider: pi
    model: lm-studio/apriel-1.5-15b-thinker 
    command: archon-assist

It eventually errors out:

INFO [2026-04-24 18:09:38.506 +0200]: dag_workflow_starting
module: "workflow.dag-executor"
workflowName: "archon-assist"
nodeCount: 1
layerCount: 1
hasIssueContext: false
issueContextLength: 0
[archon-assist] Started
INFO [2026-04-24 18:09:38.509 +0200]: dag_node_started
module: "workflow.dag-executor"
nodeId: "assist"
provider: "pi"
ERROR [2026-04-24 18:09:38.922 +0200]: dag_node_failed
module: "workflow.dag-executor"
nodeId: "assist"
err: {
"type": "Error",
"message": "Pi model not found: provider='lm-studio' model='apriel-1.5-15b-thinker'. See https://github.com/badlogic/pi-mono/blob/main/packages/ai/src/models.generated.ts for the Pi model catalog.",
"stack":
Error: Pi model not found: provider='lm-studio' model='apriel-1.5-15b-thinker'. See https://github.com/badlogic/pi-mono/blob/main/packages/ai/src/models.generated.ts for the Pi model catalog.
at sendQuery (C:\Users\xxxxx\Archon\packages\providers\src\community\pi\provider.ts:235:17)
at processTicksAndRejections (native:7:39)
}
[archon-assist] Failed: Pi model not found: provider='lm-studio' model='apriel-1.5-15b-thinker'. See https://github.com/badlogic/pi-mono/blob/main/packages/ai/src/models.generated.ts for the Pi model catalog.
WARN [2026-04-24 18:09:38.926 +0200]: dag_layer_had_failures
module: "workflow.dag-executor"
layerIdx: 0
nodeCount: 1
INFO [2026-04-24 18:09:38.927 +0200]: dag_workflow_finished

@b1skit

b1skit commented Apr 24, 2026

@coleam00 @matt2000 @Wirasm mind taking a look at this? Would unblock a lot of people using local models. Happy to test and report back if that would be helpful

@matt2000 matt2000 force-pushed the patch-pi-local-models branch from a8cc61a to 86e1c97 on April 24, 2026 19:25

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/providers/src/community/pi/provider.ts (1)

107-114: ⚠️ Potential issue | 🔴 Critical

Fix unbound-method error on line 225: call modelRegistry.find() directly.

Line 225 passes modelRegistry.find as a function without its receiver context, which ESLint flags as @typescript-eslint/unbound-method. More critically, ModelRegistry.find is a prototype method that reads this (the internal registry state). Extracting it without binding breaks the method: it will either throw Cannot read properties of undefined or silently search an empty/wrong store at runtime.

The wrapper lookupPiModel + GetModelFn was originally a workaround for Pi's getModel generic constraints. Since ModelRegistry.find already accepts runtime strings directly, the indirection is now just dead code hiding the bug:

🛠 Proposed fix
-/**
- * Typed wrapper around Pi's `getModel` for a runtime-string provider/model
- * pair. Pi's getModel signature constrains `TModelId` to
- * `keyof MODELS[TProvider]`, which isn't knowable from a runtime string —
- * the local `GetModelFn` alias is the narrowest shape that still lets us
- * bypass that constraint. Isolating the escape hatch behind one searchable
- * name keeps it auditable. Takes `getModel` as a parameter because the Pi
- * SDK is loaded dynamically (see the header comment on this file for why).
- */
-type GetModelFn = (provider: string, modelId: string) => Model<Api> | undefined;
-function lookupPiModel(
-  getModel: GetModelFn,
-  provider: string,
-  modelId: string
-): Model<Api> | undefined {
-  return getModel(provider, modelId);
-}
@@
     const authStorage = piCodingAgent.AuthStorage.create();
     const modelRegistry = piCodingAgent.ModelRegistry.create(authStorage);
-    // 2. Look up the Model via Pi's Registry. `lookupPiModel` returns
-    //    undefined when not found; we guard explicitly below.
-    // Cast to the runtime-string-friendly shape — see `lookupPiModel`'s docblock.
-    const model = lookupPiModel(modelRegistry.find as GetModelFn, parsed.provider, parsed.modelId);
+    // 2. Look up the Model via Pi's Registry. `find` returns undefined when
+    //    not found; we guard explicitly below. Call as a method to preserve
+    //    `this` binding to the registry's internal state.
+    const model = modelRegistry.find(parsed.provider, parsed.modelId);

Also remove the now-unused Api and Model type imports from line 6.

Unit tests at line 308 already cover the unhappy path (undefined return), so the direct call slots in without additional test work.
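The unbound-method hazard the review describes can be demonstrated in isolation. This is a self-contained sketch, not Pi's actual ModelRegistry: it just shows why extracting a prototype method detaches it from its receiver.

```typescript
class Registry {
  // Internal state that `find` reads through `this`.
  private models = new Map([["lm-studio/apriel", { id: "apriel" }]]);

  find(key: string) {
    return this.models.get(key); // relies on `this` being the registry
  }
}

const registry = new Registry();

// Calling as a method works: `this` is bound to `registry`.
console.log(registry.find("lm-studio/apriel") !== undefined); // true

// Passing the bare function reference loses the receiver. Class bodies
// are strict-mode code, so `this` is undefined inside the detached call
// and `this.models` throws a TypeError.
const detached = registry.find;
let threw = false;
try {
  detached("lm-studio/apriel");
} catch {
  threw = true;
}
console.log(threw); // true
```

This is exactly why the proposed fix calls modelRegistry.find(...) directly (or would require .bind(modelRegistry)) instead of passing modelRegistry.find as a standalone function.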

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 107 - 114, The
wrapper lookupPiModel and its GetModelFn indirection are causing an
unbound-method error when passing ModelRegistry.find as a callback; replace uses
of lookupPiModel/getModel with a direct call to modelRegistry.find(...) so the
method retains its receiver context, and remove the now-unused lookupPiModel and
GetModelFn definitions plus the unused Api and Model type imports referenced at
the top of the file.
🧹 Nitpick comments (1)
packages/providers/src/community/pi/provider.ts (1)

233-252: Stale step-3 heading — AuthStorage.create() moved up to line 220.

The prose still reads "3. Build AuthStorage. AuthStorage.create() reads ~/.pi/agent/auth.json …", but the actual construction now happens in step 2 (line 220) alongside the registry. Readers following the numbered steps will hunt for a build that isn't here. The resolution-order detail is still valuable — just relabel and trim:

📝 Suggested rewording
-    // 3. Build AuthStorage. `AuthStorage.create()` reads ~/.pi/agent/auth.json
-    //    (or $PI_CODING_AGENT_DIR/auth.json), so any credential the user has
-    //    populated via `pi` → `/login` (OAuth subscriptions: Claude Pro/Max,
+    // 3. Credential resolution. The `AuthStorage` built above reads
+    //    ~/.pi/agent/auth.json (or $PI_CODING_AGENT_DIR/auth.json), so any
+    //    credential the user populated via `pi` → `/login` (OAuth
+    //    subscriptions: Claude Pro/Max,
     //    ChatGPT Plus, GitHub Copilot, Gemini CLI, Antigravity) or by editing
     //    the file directly (api_key entries) is picked up transparently.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 233 - 252,
Update the stale "3. Build AuthStorage" heading and surrounding prose to reflect
that AuthStorage.create() is now constructed earlier (at the registry step
around where AuthStorage.create() is called), e.g., remove or renumber the "3."
label and change wording to indicate this is an explanatory note about
AuthStorage resolution rather than a build step; keep the Pi internal resolution
order list and the OAuth refresh note but trim any lines that claim the build
occurs here and instead reference AuthStorage.create() where it actually runs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/providers/src/community/pi/provider.ts`:
- Around line 343-348: The comment block in provider.ts is broken: the
parenthetical about piConfig.enableExtensions interrupts the sentence so the
fragment "discovery except for explicitly-passed skill paths." is orphaned;
update the comment so it is a single grammatical sentence (or two clear
sentences) that preserves the intended meaning. Specifically, in the comment
near the Resource loader/enableExtensions text (reference
piConfig.enableExtensions and the description of the community extension
ecosystem), rephrase to either move the parenthetical so the main clause reads
"...Resource loader still suppresses filesystem discovery except for
explicitly-passed skill paths." and then append a separate sentence describing
Pi's community extension ecosystem when piConfig.enableExtensions is true, or
combine them into one coherent sentence that includes both the suppression of
filesystem discovery and the exception for extensions. Ensure the comment
mentions piConfig.enableExtensions, the extension paths
(~/.pi/agent/extensions/) and `pi install npm:<pkg>` exactly as before.

---

Outside diff comments:
In `@packages/providers/src/community/pi/provider.ts`:
- Around line 107-114: The wrapper lookupPiModel and its GetModelFn indirection
are causing an unbound-method error when passing ModelRegistry.find as a
callback; replace uses of lookupPiModel/getModel with a direct call to
modelRegistry.find(...) so the method retains its receiver context, and remove
the now-unused lookupPiModel and GetModelFn definitions plus the unused Api and
Model type imports referenced at the top of the file.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 71822697-8d45-4646-997e-f9709e779b17

📥 Commits

Reviewing files that changed from the base of the PR and between a8cc61a and 86e1c97.

📒 Files selected for processing (2)
  • packages/providers/src/community/pi/provider.test.ts
  • packages/providers/src/community/pi/provider.ts

Comment thread packages/providers/src/community/pi/provider.ts Outdated

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
packages/providers/src/community/pi/provider.ts (1)

213-232: Stale step-numbering / narrative in the AuthStorage comment block.

This block opens with "3. Build AuthStorage. AuthStorage.create() reads ~/.pi/agent/auth.json…", but authStorage is now constructed at line 201 (before the model lookup), so the "3." label and the "Build AuthStorage" framing no longer match the code below it — what actually starts here is the runtime-override step, not the construction. Consider retitling (e.g., "3. Apply per-request API key override on the existing AuthStorage…") and moving the auth-file/OAuth explainer up next to line 201 where the instance is actually created.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 213 - 232,
Update the comment to reflect actual ordering: clarify that authStorage is
already constructed earlier via AuthStorage.create() and this section is
applying a per-request runtime override using setRuntimeApiKey; retitle the
header from "3. Build AuthStorage" to something like "3. Apply per-request API
key override to existing AuthStorage" and move or copy the detailed explanation
about AuthStorage.create(), ~/.pi/agent/auth.json, OAuth refresh and resolution
order up next to where authStorage is instantiated (AuthStorage.create()) so the
narrative matches the code flow around authStorage and setRuntimeApiKey.


📥 Commits

Reviewing files that changed from the base of the PR and between 86e1c97 and bcaf70d.

📒 Files selected for processing (1)
  • packages/providers/src/community/pi/provider.ts


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
packages/providers/src/community/pi/provider.ts (1)

254-258: ⚠️ Potential issue | 🟡 Minor

Log msg argument still isn't a structured event name.

The typo and the env-mapped vs. unmapped branch split were addressed, but Pino's second positional arg here is still a prose sentence rather than a {domain}.{action}_{state} event. Every other log in this file (pi.config_env_applied, pi.session_started, pi.prompt_completed, pi.prompt_failed) follows that format, so log queries can't group on this one. Move the prose into the merging object (or a msg field) and use something like pi.credentials_missing as the event name.

✏️ Suggested edit
-      const envHint = `Provider '${parsed.provider}' is not in the Archon adapter's env-var table — file an issue if you want a shortcut env var for it.`;
-      const loginHint = `Or run \`pi\` and type \`/login\` locally to authenticate '${parsed.provider}' via OAuth; credentials land in ~/.pi/agent/auth.json and are picked up automatically.`;
-      // No credentials for unmapped providers is not necessarily a problem, e.g., local models.
-      getLog().info(
-        { piProvider: parsed.provider },
-        `No Pi credentials found for provider. ${envHint} ${loginHint}`
-      );
+      const envHint = `Provider '${parsed.provider}' is not in the Archon adapter's env-var table — file an issue if you want a shortcut env var for it.`;
+      const loginHint = `Or run \`pi\` and type \`/login\` locally to authenticate '${parsed.provider}' via OAuth; credentials land in ~/.pi/agent/auth.json and are picked up automatically.`;
+      // No credentials for unmapped providers is not necessarily a problem, e.g., local models.
+      getLog().info(
+        {
+          piProvider: parsed.provider,
+          envHint,
+          loginHint,
+          detail: 'No Pi credentials found for provider',
+        },
+        'pi.credentials_missing'
+      );

As per coding guidelines: "event naming format: {domain}.{action}_{state}; avoid generic events like 'processing' or 'handling'".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 254 - 258, The
log call using getLog().info currently passes a prose string as the second
positional arg; change it to use a structured event name (e.g.,
"pi.credentials_missing") as the second arg and move the human-readable sentence
into the merged object (for example a msg or message field) alongside
parsed.provider, envHint and loginHint so the log becomes a structured event
consistent with other events like pi.config_env_applied and pi.session_started;
update the getLog().info invocation where parsed.provider is referenced to
include these fields in the object and use "pi.credentials_missing" as the event
name.
🧹 Nitpick comments (1)
packages/providers/src/community/pi/provider.ts (1)

201-217: Step-numbered narration is now out of order with code flow.

With AuthStorage.create() hoisted above the model lookup, the numbered comments no longer match the execution order: step "3. Build AuthStorage" (line 213) narrates code that actually ran before step "2. Look up the Model" (line 203). Since the block at 213-232 is now purely explanatory (the construction already happened at line 201), consider either renumbering or moving the doc block up to the AuthStorage.create() call so readers following the numbered steps don't get disoriented.

♻️ One way to reorganize
-    const authStorage = piCodingAgent.AuthStorage.create();
-    const modelRegistry = piCodingAgent.ModelRegistry.create(authStorage);
-    // 2. Look up the Model via Pi's Registry. `modelRegistry.find` returns
-    //    undefined when not found; we guard explicitly below.
+    // 2. Build AuthStorage + ModelRegistry. `AuthStorage.create()` reads
+    //    ~/.pi/agent/auth.json (or $PI_CODING_AGENT_DIR/auth.json), so
+    //    credentials populated via `pi` → `/login` (OAuth: Claude Pro/Max,
+    //    ChatGPT Plus, Copilot, Gemini CLI, Antigravity) or edited directly
+    //    (api_key entries) are picked up transparently.
+    //
+    //    Pi's internal resolution order:
+    //      1. runtime override  (setRuntimeApiKey below)
+    //      2. auth.json api_key entry
+    //      3. auth.json oauth entry  (auto-refreshes expired tokens)
+    //      4. env var fallback  (Pi's getEnvApiKey, e.g. ANTHROPIC_API_KEY)
+    const authStorage = piCodingAgent.AuthStorage.create();
+    const modelRegistry = piCodingAgent.ModelRegistry.create(authStorage);
+
+    // 3. Look up the Model via Pi's Registry. `modelRegistry.find` returns
+    //    undefined when not found; we guard explicitly below.
     const model = modelRegistry.find(parsed.provider, parsed.modelId);
     if (!model) {
       throw new Error(
         `Pi model not found: provider='${parsed.provider}' model='${parsed.modelId}'. ` +
           'See https://github.com/badlogic/pi-mono/blob/main/packages/ai/src/models.generated.ts for the Pi model catalog.'
       );
     }
 
-    // 3. Build AuthStorage. `AuthStorage.create()`, above, read ~/.pi/agent/auth.json
-    //    (or $PI_CODING_AGENT_DIR/auth.json), so any credentials the user has
-    //    populated via `pi` → `/login` (OAuth subscriptions: Claude Pro/Max,
-    //    ChatGPT Plus, GitHub Copilot, Gemini CLI, Antigravity) or by editing
-    //    the file directly (api_key entries) are picked up transparently.
-    //
-    //    Per-request env vars override the file via setRuntimeApiKey — this
-    //    mirrors Claude's process-env + request-env merge pattern and
-    //    ensures codebase-scoped env vars (from .archon/config.yaml `env:`)
-    //    win over the user's global Pi login.
-    //
-    //    Pi's internal resolution order:
-    //      1. runtime override  (our setRuntimeApiKey below)
-    //      2. auth.json api_key entry
-    //      3. auth.json oauth entry  (auto-refreshes expired tokens)
-    //      4. env var fallback  (Pi's getEnvApiKey, e.g. ANTHROPIC_API_KEY)
-    //
-    //    OAuth refresh note: Pi refreshes expired access tokens against the
-    //    provider's OAuth server and rewrites ~/.pi/agent/auth.json under a
-    //    file lock (same mechanism pi CLI uses — safe for concurrent access).
+    // 4. Per-request env vars override the file via setRuntimeApiKey — this
+    //    mirrors Claude's process-env + request-env merge pattern and
+    //    ensures codebase-scoped env vars (from .archon/config.yaml `env:`)
+    //    win over the user's global Pi login. Pi refreshes expired OAuth
+    //    tokens under a file lock (same mechanism as the pi CLI).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 201 - 217, The
numbered explanatory comments are out of order: AuthStorage.create() is called
before the "2. Look up the Model" block but the narrative labels step 3 as
building AuthStorage; fix by either moving the paragraph that begins "3. Build
AuthStorage..." up to directly above the AuthStorage.create() call (so the steps
read 1→2→3 in execution order) or renumber the steps so the narration matches
the actual flow (e.g., make AuthStorage the earlier step and adjust the "2. Look
up the Model" text); update references to AuthStorage.create(),
modelRegistry.find, and the model/parsed variables to ensure the comment sits
adjacent to the code it describes.


📥 Commits

Reviewing files that changed from the base of the PR and between bcaf70d and bcaac3d.

📒 Files selected for processing (1)
  • packages/providers/src/community/pi/provider.ts

@loopyd

loopyd commented Apr 24, 2026

@matt2000 hard-coded model names in Archon default workflows/commands will cause node SDK Error crashes when using local LLMs. I ran into that this morning when trying to run the idea-to-pr workflow and ended up having to copy them to my project folder's .archon location for modification. Using the default Archon workflows provided by the project requires heavy customization of both workflows and commands to move away from hard-coded Claude Code/Codex model values, and from those that generate titles as well (trying to use haiku by default). Unfortunately the default template-generator.ts isn't smart enough to regenerate the defaults with values that work for local LLMs. Consider a path to demangle this edge case for the packed "user friendly" workflows/commands, to keep the out-of-the-box user experience robust for local users:

A. Docs updates that let local LLM users know forking the default template-baked commands/workflows is necessary to fix provider: and model: references, as well as to update command: and {runner} pointers to refer to custom commands run from workflows -- a regression in UX, but it keeps the default workflows untouched and this PR simple. Consider in this solution path that local LLM users will need to go back to spec engineering for their specific model in most cases to fix tool calls and provide model-specific token directives, and will probably need to fork the defaults anyway...

B. Improve actual template regeneration and catch SDK Error exceptions, regenerating the template and retrying on SDK Error failure with the user's model settings injected (a baked failsafe execution path for local LLM users), so the default provided templates regenerate with properly updated values and avoid node SDK errors when the user is on a local LLM -- this will require more complex modifications and may inflate the size/scope of this PR, but it commits the outstanding issue to code for the SDK Error case when using empty auth/local.
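Sketched very roughly, option B's failsafe could look like the following (every name here, `runWithFallback` included, is hypothetical and not an existing Archon API):

```typescript
// Rough sketch of option B: if running a template fails with an SDK error,
// regenerate it with the user's model settings injected and retry once.
// All names are illustrative, not existing Archon APIs.
interface ModelSettings {
  provider: string;
  model: string;
}

function runWithFallback(
  template: string,
  settings: ModelSettings,
  run: (template: string) => string,
  regenerate: (template: string, s: ModelSettings) => string
): string {
  try {
    return run(template);
  } catch {
    // Baked failsafe path: rewrite hard-coded model refs, then retry once.
    return run(regenerate(template, settings));
  }
}
```

The retry-once shape keeps the happy path untouched for cloud users while giving local users a recovery path instead of a crash.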

Unfortunately there's more upstream to this one than meets the eye. >.< Good work, though.

@matt2000
Contributor Author

hard-coded model names in Archon default workflows/commands will cause node SDK Error crashes when using local LLMs

I don't really use default workflows much, so this isn't a priority for me. But I think the real fix is that Archon should have configurable "model aliases" that are interchangeable with model IDs, like "default", "small", "tooluser", etc., that can default to anthropic models or whatever Cole likes, but be overridden in user config. All the default workflows should only use aliases, and never "concrete" model names.
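A minimal sketch of that alias idea, assuming a user-overridable table (the names and shipped defaults below are illustrative, not Archon config):

```typescript
// Illustrative model-alias resolution: workflows reference aliases like
// "small" or "tooluser"; user config overrides the shipped defaults, and
// concrete model IDs pass through untouched. Not an existing Archon API.
type ModelRef = string; // e.g. "anthropic/claude-haiku" or an alias

const defaultAliases: Record<string, ModelRef> = {
  default: "anthropic/claude-sonnet",
  small: "anthropic/claude-haiku",
  tooluser: "anthropic/claude-sonnet",
};

function resolveModel(
  ref: ModelRef,
  userAliases: Record<string, ModelRef> = {}
): ModelRef {
  // User config wins over shipped defaults; unknown refs are treated
  // as concrete model IDs and returned as-is.
  const aliases = { ...defaultAliases, ...userAliases };
  return aliases[ref] ?? ref;
}
```

With this shape, a local-LLM user could point `small` at an ollama model in their config without touching any workflow file.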

@matt2000 matt2000 force-pushed the patch-pi-local-models branch from 88c3a30 to f3d1657 Compare April 24, 2026 21:26

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
packages/providers/src/community/pi/provider.ts (1)

213-233: Step numbering in the comment block is now misaligned.

This block is titled "3. Build AuthStorage" and opens with "AuthStorage.create(), above, read …", but AuthStorage is now constructed at line 201 (before the model lookup labeled "2."). The step headers read a bit confusingly — consider renumbering so the comment reflects the new ordering (e.g., fold the AuthStorage construction into step 2, or relabel this block "Resolve credentials" without a step number). Purely editorial; no behavior impact.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 213 - 233, The
comment block heading "3. Build AuthStorage" is out of order relative to where
AuthStorage is actually constructed (AuthStorage.create() / AuthStorage
constructor appears earlier near the model lookup labeled "2."); update the
comment to reflect the actual flow by either renumbering the step to match the
current ordering, folding the AuthStorage construction discussion into the "2."
section, or replace the step number with a neutral label like "Resolve
credentials" while keeping references to AuthStorage.create() and
setRuntimeApiKey so readers can locate the relevant code.


📥 Commits

Reviewing files that changed from the base of the PR and between bcaac3d and f3d1657.

📒 Files selected for processing (2)
  • packages/providers/src/community/pi/provider.test.ts
  • packages/providers/src/community/pi/provider.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/providers/src/community/pi/provider.test.ts

@loopyd

loopyd commented Apr 24, 2026

All the default workflows should only use aliases, and never "concrete" model names.

This makes sense; as I had indicated, most local LLM users will need to procure their own workflows anyway to deal with model-baked token directives for tool calls and the specific baked instruction-calling sequences their local models use. Aliases is a good idea but may be outside the scope of this PR.

@matt2000
Contributor Author

Aliases is a good idea but may be outside the scope of this PR.

100%. I have no intention of implementing that here. I'm trying to keep this PR focused on the minimum to get local models working at all.

@matt2000
Contributor Author

@Wirasm I think this is ready to go. Can you take a look and let me know if you need anything else from me in order to get this merged?

@loopyd

loopyd commented Apr 27, 2026

I saw some new YouTube uploads, I have a feeling there's some other priorities right now. Till this is merged I'm using your fork and tellin' my buds!

@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

Hello, sorry for the wait here; there have been a lot of PRs. Starting a check on this now, and I'll also test some Archon workflows on it, so sorry if there is spam.

@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

Multi-Agent Review Summary — PR #1284

Ran code review, test analysis, silent-failure hunt, and docs-impact across the diff. Verdict: NEEDS FIXES (small) — direction is good, scope is disciplined, but a few items should be addressed before merge.

Reference IDs: C = critical, I = important, S = suggestion (using prefixed IDs to avoid GitHub issue auto-linking).


Strengths

  • Scope is tight and matches the description: only packages/providers/src/community/pi/provider.{ts,test.ts}.
  • Replacing lookupPiModel + getModel with ModelRegistry.find is the right move — lookupPiModel was doing manual type-cast acrobatics for what the registry already provides.
  • The split between env-mapped providers (still throws, with an actionable pi /login hint) and unmapped providers (info-log + continue) cleanly captures the local-model use case without weakening fail-fast for the common cloud providers.
  • Tests are updated to match the new branches; logger mocks now reset per test.

Important Issues

I1 — Comment block at provider.ts:213 is stale after the reorder
The comment header says "3. Build AuthStorage" and uses past tense ("AuthStorage.create(), above, read ~/.pi/agent/auth.json") because the construction was moved up to step 2 (line 201). The numbered-step structure is now misleading. Either renumber the comments or merge the step 2/3 commentary into one block.

I2 — Pino log shape collides with reserved msg field at provider.ts:255-261

getLog().info(
  { piProvider: parsed.provider, msg: `No Pi credentials found...` },
  'pi.auth_missing'
);

The second-positional arg 'pi.auth_missing' is correct per CLAUDE.md's {domain}.{action}_{state} convention. But Pino sets msg itself from that second arg, so the msg key inside the data object collides — depending on serializer settings it either shadows Pino's msg or doubles up in the output. Move the prose into separate fields:

getLog().info(
  { piProvider: parsed.provider, envHint, loginHint },
  'pi.auth_missing'
);

The corresponding test at provider.test.ts:253-259 asserts the msg-in-data shape, so it'll pass while validating the wrong thing — update the assertion to match the corrected shape.

I3 — Class doc / inline comments still assert an "in-memory / never touches ~/.pi/" invariant that this PR breaks
provider.ts still has commentary (around the class doc and the resource-loader comment) implying the server doesn't read ~/.pi/. After this PR, every sendQuery does two synchronous disk reads (AuthStorage.create() reads auth.json; ModelRegistry.create() reads models.json). The PR description acknowledges "this was always a lie because we read auth.json," which is fair — but the docs should now reflect reality. Either:

  • update the comments to say "per-request reads of ~/.pi/agent/{auth.json,models.json}; in-memory after that", or
  • use ModelRegistry.inMemory(authStorage) (which would defeat the PR's purpose for custom-model discovery — so option 1 is preferred).

I4 — AuthStorage.create() and ModelRegistry.create() are unguarded (provider.ts:201-202)
A malformed ~/.pi/agent/auth.json (or a permissions error) will throw a raw SDK SyntaxError / EACCES with no Archon-level framing — the user sees an SDK stack with no indication that Pi credential storage init failed. CLAUDE.md "Fail Fast + Explicit Errors" calls for actionable error messages at boundaries. Wrap with try/catch and re-throw with context, or at least log pi.auth_storage_init_failed before letting it bubble.
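One possible shape for that guard, sketched generically (the `guardedInit` helper and its message wording are hypothetical; only the `AuthStorage.create()` / `ModelRegistry.create()` calls it would wrap are real):

```typescript
// Illustrative init guard: wrap an SDK factory call so a malformed
// ~/.pi/agent/auth.json surfaces with Archon-level framing instead of a
// raw SyntaxError/EACCES stack. `guardedInit` is a hypothetical helper.
function guardedInit<T>(label: string, create: () => T): T {
  try {
    return create();
  } catch (err) {
    const detail = err instanceof Error ? err.message : String(err);
    throw new Error(
      `Pi ${label} init failed (check ~/.pi/agent/ contents and permissions): ${detail}`
    );
  }
}

// Usage sketch, e.g.:
//   const authStorage = guardedInit('AuthStorage', () =>
//     piCodingAgent.AuthStorage.create());
```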

I5 — ModelRegistry.find returning undefined throws but never logs (provider.ts:206-211)
The throw is good, but no pi.model_not_found structured event is emitted before it, breaking the _started/_failed pairing convention used elsewhere in the file. More importantly: ModelRegistry swallows models.json parse/validation errors into a loadError accessible via modelRegistry.getError(). If the user's models.json fails validation (e.g., a non-built-in provider missing apiKey), the registry silently drops all custom models and find() returns undefined — they then see "model not found" with no idea their config was rejected. Suggest:

const model = modelRegistry.find(parsed.provider, parsed.modelId);
if (!model) {
  const loadError = modelRegistry.getError?.();
  getLog().error(
    { piProvider: parsed.provider, modelId: parsed.modelId, loadError },
    'pi.model_not_found'
  );
  throw new Error(
    `Pi model not found: provider='${parsed.provider}' model='${parsed.modelId}'.${loadError ? ` Note: ~/.pi/agent/models.json load failed — ${loadError}` : ''} See https://github.com/badlogic/pi-mono/blob/main/packages/ai/src/models.generated.ts for the catalog.`
  );
}

This addresses the "why doesn't my LM Studio model show up" debugging path that several PR commenters are about to hit.

I6 — Per-request disk I/O on a hot path (provider.ts:201-202)
AuthStorage.create() + ModelRegistry.create() run on every sendQuery, doing two synchronous filesystem reads per request. The PR description proposes addressing this in a follow-up if it becomes a performance problem, which is a reasonable YAGNI call — but worth noting that a per-instance cache (set once in the constructor) would be a ~3-line change and eliminate the regression from the previous inMemory path. Acceptable as-is given the stated rollback path.
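The per-instance cache alluded to above could be as small as a lazily-initialized field; a sketch only, with made-up names:

```typescript
// Illustrative lazy per-instance cache: build AuthStorage + ModelRegistry
// on first use and reuse them across sendQuery calls, trading the two
// per-request disk reads for a registry that is stale until the provider
// instance is recreated. Names are hypothetical.
class Lazy<T> {
  private value: T | undefined;
  constructor(private readonly build: () => T) {}
  get(): T {
    this.value ??= this.build();
    return this.value;
  }
}
```

The trade-off is the one the PR description already names: a cached registry stops picking up edits to ~/.pi/agent/models.json until the provider is recreated.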

I7 (tests) — No assertion that ModelRegistry.create receives the authStorage instance
This is the structural guarantee that the headline fix works (the registry needs to be paired with an AuthStorage that knows about the user's config). The tests currently mock both but never assert the wiring. A 2-line addition:

expect(mockModelRegistryCreate).toHaveBeenCalledWith(authStorageInstance);

catches any future refactor that accidentally drops the argument.

I8 (tests) — No coverage for AuthStorage.create() / ModelRegistry.create() throwing
The construction was moved earlier in the function; the failure path is unguarded (see I4) and untested. One mockImplementationOnce test would close this.


Documentation

I9 — packages/docs-web/src/content/docs/getting-started/ai-assistants.md is silent on the new credential-free path
The "Authenticate" section (around L242) presents auth as the only setup topic for Pi, which was true before this PR. After merge:

  • Mapped providers (anthropic, openai, …) still throw on missing creds — docs accurate.
  • Unmapped providers (LM Studio, ollama, llamacpp, custom OpenAI-compatible endpoints in ~/.pi/agent/models.json) now log info and continue — undocumented.

Suggested edits:

  • Lead sentence (~L232): append "…and local inference (LM Studio, ollama, llamacpp via ~/.pi/agent/models.json)."
  • Before the env-var table (~L242): add "Credentials are not required for local or custom providers configured in ~/.pi/agent/models.json — Archon logs an info notice and continues."
  • Line ~L265: split "unmapped" into "unmapped cloud backends (file an issue for env-var shortcuts)" vs "local/custom providers (no Archon-side config needed beyond the model ref)".

Suggestions

S1 — info vs warn for pi.auth_missing on unmapped providers
For a genuinely local provider (LM Studio, ollama) this is expected steady state, so info is defensible. But CodeRabbit and the silent-failure pass both flagged it — for an unmapped provider that does require auth (a less-common backend not in the env-var table), this becomes the only breadcrumb between the missing creds and a cryptic SDK-layer 401 later. warn would better reflect "unusual but not necessarily fatal." Author's call.

S2 — Env-mapped throw at provider.ts:244-249 doesn't log before throwing
Minor pairing-convention breach. Caller surfaces the message anyway, so low urgency.


Notes on the stalsma report in PR comments

The user reporting that LM Studio still fails with "model not found" after this PR has a models.json config with apiKey: "lm-studio" set, which would pass the SDK's validateConfig — so their report is most likely either (a) tested against dev, not the PR head, or (b) a Windows-specific path issue with getAgentDir(). Other commenters (b1skit, loopyd) report the PR works for them. Not blocking — but I5 above would surface the underlying "models.json failed validation" case clearly if any user does hit it.


Recommended Action Order

  1. Fix I2 (log shape) and I5 (surface loadError in model-not-found) — highest user-debuggability impact.
  2. Fix I3 (update stale "in-memory" docs) — accuracy.
  3. Fix I1 (renumber/merge stale comment block) and I4 (guard SDK init).
  4. Add tests I7 + I8.
  5. Update docs I9 in this PR or a follow-up.

Direction is solid and the community demand is real — happy to re-review after fixes.

Wirasm added 2 commits April 27, 2026 12:31
Address review feedback on the ModelRegistry migration:

- Surface modelRegistry.getError() in the "model not found" message and
  emit a pi.model_not_found structured event. ModelRegistry catches
  models.json parse/validation errors into an internal loadError rather
  than throwing, so a malformed entry (e.g. custom provider missing
  baseUrl) was silently degrading to a vague "model not found" with no
  hint at the actual cause.
- Wrap AuthStorage.create() and ModelRegistry.create() in try/catch and
  rethrow with Pi-framed context. A malformed ~/.pi/agent/auth.json
  was previously surfacing as a raw SDK SyntaxError stack with no
  indication of where it came from.
- Renumber the Step 2 / Step 3 / Step 4 comment block to match the
  actual code order after auth+registry construction was hoisted earlier
  in the function.
- Move the missing-credentials log message out of the Pino-reserved
  msg field (which would collide with the log line message) into
  separate envHint/loginHint structured fields. The pi.auth_missing
  event name stays as the second-positional arg per the
  {domain}.{action}_{state} convention.

Tests:
- Update the existing auth_missing assertion to match the new log shape.
- Add a test that ModelRegistry.create receives the AuthStorage instance
  (the wiring that makes the headline fix work for custom models).
- Add a test that AuthStorage.create() failure surfaces as
  pi.auth_storage_init_failed with the underlying error message.
- Add a test that models.json load errors flow into the not-found
  message and the pi.model_not_found structured log.
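The error-surfacing change described above can be sketched roughly as follows. This is a minimal illustration, not the actual Archon code: `ModelRegistryLike` is a stand-in interface for the real `ModelRegistry` from `@mariozechner/pi-coding-agent`, and the exact message wording is assumed.

```typescript
// Stand-in for the SDK's ModelRegistry surface used here.
interface ModelRegistryLike {
  find(provider: string, modelId: string): unknown | undefined;
  getError(): string | undefined;
}

// Build the "model not found" message, appending the models.json load error
// (if any) so a malformed custom-provider entry is no longer silently
// degraded to a vague not-found.
function modelNotFoundMessage(
  registry: ModelRegistryLike,
  provider: string,
  modelId: string,
): string {
  let msg = `Pi model not found: ${provider}/${modelId}`;
  const loadError = registry.getError();
  if (loadError !== undefined) {
    msg += ` (models.json failed to load: ${loadError})`;
  }
  return msg;
}
```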
After the ModelRegistry migration, providers not in PI_PROVIDER_ENV_VARS
(LM Studio, ollama, llamacpp, custom OpenAI-compatible endpoints
configured in ~/.pi/agent/models.json) work without any Archon-side
credentials — Archon logs a pi.auth_missing info notice and continues.
Add a "Local / custom providers" subsection under Authenticate so users
find this path without reading the source. Also call it out in the
section opener alongside the cloud backends.
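For readers following along: a custom-provider entry of the kind discussed here might look roughly like the object below. The baseUrl and apiKey fields come from this thread; the surrounding shape is an assumption for illustration — the authoritative schema is whatever `@mariozechner/pi-coding-agent` validates in `~/.pi/agent/models.json`.

```typescript
// Hypothetical shape of a models.json entry for a local LM Studio endpoint.
// Field names beyond baseUrl/apiKey are assumptions, not the SDK schema.
const modelsJson = {
  providers: {
    lmstudio: {
      baseUrl: 'http://localhost:1234/v1', // assumed LM Studio default port
      apiKey: 'lm-studio', // placeholder; local servers typically ignore it
      models: [{ id: 'qwen2.5-coder-7b-instruct' }],
    },
  },
};
```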
@Wirasm
Collaborator

Wirasm commented Apr 27, 2026

Pushed two follow-up commits addressing the review (maintainer edit, since the PR allows it):

d4fedc02 — fix(pi): surface models.json errors and guard auth/registry init

  • I5: Surface modelRegistry.getError() in the not-found message + emit pi.model_not_found event so users debugging custom-provider configs (LM Studio, etc.) see the real cause instead of a vague "model not found".
  • I4: Wrap AuthStorage.create() / ModelRegistry.create() in try/catch — malformed ~/.pi/agent/auth.json no longer surfaces as a raw SDK SyntaxError.
  • I1: Renumber Step 2 / Step 3 / Step 4 comments to match the actual code order after the registry/auth construction was hoisted earlier.
  • I2: Move missing-credentials prose out of the Pino-reserved msg field into separate envHint/loginHint structured fields. Event name pi.auth_missing stays as the second-positional arg per the project's {domain}.{action}_{state} convention.
  • Tests: updated existing assertion to match new shape; added three new tests — ModelRegistry.create receives the AuthStorage instance (the headline-fix wiring), AuthStorage.create() throwing surfaces with context, and models.json load errors flow into the not-found message.
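The I2 log-shape change can be sketched like this. `logger` is a stand-in for the real Pino instance; the `envHint`/`loginHint` field names are from the commit text, while the hint wording and extra fields are assumptions.

```typescript
// Sketch: hints move out of Pino's reserved `msg` slot into structured
// fields, with the event name staying as the positional message argument
// per the {domain}.{action}_{state} convention.
type LogFields = Record<string, string>;
const lines: Array<{ fields: LogFields; event: string }> = [];
const logger = {
  info(fields: LogFields, event: string) {
    lines.push({ fields, event }); // capture instead of writing to stdout
  },
};

logger.info(
  {
    provider: 'lmstudio', // assumed field
    envHint: 'no PI_PROVIDER_ENV_VARS entry for this provider', // assumed wording
    loginHint: 'run the Pi login flow if this provider needs credentials', // assumed wording
  },
  'pi.auth_missing',
);
```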

14ba080b — docs(pi): document credential-free path for local providers

  • I9: Added "Local / custom providers (no credentials needed)" subsection under Authenticate in ai-assistants.md, plus a mention in the section opener. Users running LM Studio / ollama / llamacpp via ~/.pi/agent/models.json shouldn't need to read the source to discover this works.

Deferred (per author's call):

  • I6 (per-request disk I/O caching) — author flagged this as a possible follow-up in the PR description; agreeing this is YAGNI for now.
  • S1 (info→warn for unmapped-provider auth_missing) — kept as info since for genuinely-local providers this is steady state. The new structured fields make either level greppable.
  • S2 (logging on env-mapped throw) — minor pairing-convention nit, left alone.

`bun run validate` is clean (type-check + lint + format + tests across all packages). Mind giving the new diff a look?


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/providers/src/community/pi/provider.test.ts (1)

389-402: ⚠️ Potential issue | 🟡 Minor

Stale comment: the adapter no longer "rejects unknown providers."

Inline comment claims 'nonexistent' is handled in mockModelRegistryFind to return undefined, but the adapter rejects unknown providers. After this PR's pi.auth_missing change, the adapter no longer rejects unknown providers — it logs and continues — so the rationale for switching to a known-provider/unknown-model is now "to bypass the credential-missing log path," not "to avoid the unknown-provider rejection." Worth updating so the next reader doesn't assume the old behavior still exists.

📝 Suggested rewrite
-    // 'nonexistent' is handled in mockModelRegistryFind to return undefined, but
-    // the adapter rejects unknown providers. To exercise
-    // the not-found branch, use a known provider but unknown modelId by
-    // temporarily swapping mockModelRegistryFind to always return undefined.
+    // Use a known provider (`google`, with GEMINI_API_KEY set) so the
+    // credential path resolves cleanly and we exercise *only* the
+    // ModelRegistry.find()-returns-undefined branch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.test.ts` around lines 389 - 402,
The test comment is outdated about adapter behavior; update the inline comment
in packages/providers/src/community/pi/provider.test.ts to reflect that after
the pi.auth_missing change the adapter no longer rejects unknown providers but
logs and continues. Specifically, revise the comment around
mockModelRegistryFind.mockImplementationOnce and the call to new
PiProvider().sendQuery(...) to state the swap is done to bypass the
credential-missing log path (not to avoid an unknown-provider rejection), and
mention mockModelRegistryFind and PiProvider.sendQuery by name so readers can
locate the affected test logic.
packages/providers/src/community/pi/provider.ts (1)

251-457: ⚠️ Potential issue | 🟡 Minor

Section comment numbering has duplicate "4." and "5." after the registry/lookup insertions.

The new sections "2. Build AuthStorage + ModelRegistry" (line 201) and "3. Look up the model" (line 227) were inserted, but the downstream comment numbering wasn't bumped, so the file now reads 1, 2, 3, 4, 4, 5, 5, 6 (Resolve creds → Translate nodeConfig → Session management → Structured output → Bridge events) and the inner sub-bullets 4a–4f straddle two different "4." sections. This was specifically called out as a needed fix during prior review.

♻️ Suggested renumbering
-    // 4. Resolve credentials. authStorage already loaded ~/.pi/agent/auth.json
+    // 4. Resolve credentials. authStorage already loaded ~/.pi/agent/auth.json
@@
-    // 4. Translate Archon nodeConfig to Pi SDK options. All three translations
+    // 5. Translate Archon nodeConfig to Pi SDK options. All three translations
@@
-    //    4a. thinkingLevel: covers `thinking`/`effort` nodeConfig fields.
+    //    5a. thinkingLevel: covers `thinking`/`effort` nodeConfig fields.
@@
-    //    4b. tools: covers allowed_tools / denied_tools. `undefined` leaves Pi
+    //    5b. tools: covers allowed_tools / denied_tools. `undefined` leaves Pi
@@
-    //    4c. systemPrompt: request-level (AgentRequestOptions) wins over
+    //    5c. systemPrompt: request-level (AgentRequestOptions) wins over
@@
-    //    4d. skills: Archon uses name references (e.g. `skills: [agent-browser]`).
+    //    5d. skills: Archon uses name references (e.g. `skills: [agent-browser]`).
@@
-    // 5. Session management. Pi stores each session as a JSONL file under
+    // 6. Session management. Pi stores each session as a JSONL file under
@@
-    // 4e. Extension flag pass-through. Must happen before bindExtensions
+    // 6a. Extension flag pass-through. Must happen before bindExtensions
@@
-    // 4f. Bind UI context (so ctx.hasUI is true and ctx.ui.notify() forwards
+    // 6b. Bind UI context (so ctx.hasUI is true and ctx.ui.notify() forwards
@@
-    // 5. Structured output (best-effort). Pi has no SDK-level JSON schema
+    // 7. Structured output (best-effort). Pi has no SDK-level JSON schema
@@
-    // 6. Bridge callback-based events to the async generator contract.
+    // 8. Bridge callback-based events to the async generator contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 251 - 457, The
section comment numbers are out of sync after inserting "2. Build AuthStorage +
ModelRegistry" and "3. Look up the model"; update the subsequent top-level
comment numbers and their sublabels so they increment correctly (so "Resolve
credentials" becomes 4, "Translate Archon nodeConfig to Pi SDK options" becomes
5 with its sublabels 5a–5f, "Session management" becomes 6, "Structured output"
becomes 7, and "Bridge callback..." becomes 8). Edit the inline numbered comment
prefixes around the blocks that call resolvePiThinkingLevel, resolvePiTools,
resolvePiSkills, resolvePiSession, createAgentSession, and
augmentPromptForJsonSchema so the labels and any references (e.g., "4a–4f",
"5.", "6.") match the new numbering.
🧹 Nitpick comments (1)
packages/providers/src/community/pi/provider.ts (1)

213-225: Optional: cache AuthStorage / ModelRegistry per Pi-config-dir to avoid disk I/O on every sendQuery().

Both AuthStorage.create() and ModelRegistry.create() synchronously read ~/.pi/agent/{auth.json,models.json} on every call. The inline comment correctly justifies why you don't cache (users edit those files between calls and OAuth refresh rewrites auth.json under a file lock), but in workflow runs with many sequential Pi nodes this is N redundant reads per invocation. A simple TTL cache (e.g. 1–2 s) keyed on process.env.PI_CODING_AGENT_DIR ?? '~/.pi/agent' would preserve the "edit-and-rerun" UX while collapsing tight bursts. Treat this as a follow-up — not a blocker.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 213 - 225,
AuthStorage.create() and ModelRegistry.create() are being called on every
sendQuery() causing redundant disk reads; add a small TTL cache keyed by the Pi
agent directory (process.env.PI_CODING_AGENT_DIR ?? '~/.pi/agent') that returns
cached instances of AuthStorage and ModelRegistry for e.g. 1–2 seconds, and have
sendQuery() (or the code that currently calls piCodingAgent.AuthStorage.create /
piCodingAgent.ModelRegistry.create) use the cache instead of creating new
instances each time; ensure the cache still re-reads after TTL expires so edits
or OAuth refreshes are picked up and preserve existing error handling around
pi.auth_storage_init_failed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/providers/src/community/pi/provider.ts`:
- Around line 213-217: The code defensively uses optional chaining when invoking
modelRegistry.getError (calling modelRegistry.getError?.()), but
ModelRegistry.create guarantees a public getError(): string | undefined method,
so remove the unnecessary optional chaining and call modelRegistry.getError()
directly; update the invocation where modelRegistry.getError?.() is used (in the
block after ModelRegistry.create) to modelRegistry.getError() to match the
documented API and existing tests.

---

Outside diff comments:
In `@packages/providers/src/community/pi/provider.test.ts`:
- Around line 389-402: The test comment is outdated about adapter behavior;
update the inline comment in
packages/providers/src/community/pi/provider.test.ts to reflect that after the
pi.auth_missing change the adapter no longer rejects unknown providers but logs
and continues. Specifically, revise the comment around
mockModelRegistryFind.mockImplementationOnce and the call to new
PiProvider().sendQuery(...) to state the swap is done to bypass the
credential-missing log path (not to avoid an unknown-provider rejection), and
mention mockModelRegistryFind and PiProvider.sendQuery by name so readers can
locate the affected test logic.

In `@packages/providers/src/community/pi/provider.ts`:
- Around line 251-457: The section comment numbers are out of sync after
inserting "2. Build AuthStorage + ModelRegistry" and "3. Look up the model";
update the subsequent top-level comment numbers and their sublabels so they
increment correctly (so "Resolve credentials" becomes 4, "Translate Archon
nodeConfig to Pi SDK options" becomes 5 with its sublabels 5a–5f, "Session
management" becomes 6, "Structured output" becomes 7, and "Bridge callback..."
becomes 8). Edit the inline numbered comment prefixes around the blocks that
call resolvePiThinkingLevel, resolvePiTools, resolvePiSkills, resolvePiSession,
createAgentSession, and augmentPromptForJsonSchema so the labels and any
references (e.g., "4a–4f", "5.", "6.") match the new numbering.

---

Nitpick comments:
In `@packages/providers/src/community/pi/provider.ts`:
- Around line 213-225: AuthStorage.create() and ModelRegistry.create() are being
called on every sendQuery() causing redundant disk reads; add a small TTL cache
keyed by the Pi agent directory (process.env.PI_CODING_AGENT_DIR ??
'~/.pi/agent') that returns cached instances of AuthStorage and ModelRegistry
for e.g. 1–2 seconds, and have sendQuery() (or the code that currently calls
piCodingAgent.AuthStorage.create / piCodingAgent.ModelRegistry.create) use the
cache instead of creating new instances each time; ensure the cache still
re-reads after TTL expires so edits or OAuth refreshes are picked up and
preserve existing error handling around pi.auth_storage_init_failed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: df057206-81aa-4845-9c20-fa255cf9d44a

📥 Commits

Reviewing files that changed from the base of the PR and between f3d1657 and 14ba080.

📒 Files selected for processing (3)
  • packages/docs-web/src/content/docs/getting-started/ai-assistants.md
  • packages/providers/src/community/pi/provider.test.ts
  • packages/providers/src/community/pi/provider.ts

Comment on lines +213 to +217
let authStorage: ReturnType<typeof piCodingAgent.AuthStorage.create>;
let modelRegistry: ReturnType<typeof piCodingAgent.ModelRegistry.create>;
try {
authStorage = piCodingAgent.AuthStorage.create();
modelRegistry = piCodingAgent.ModelRegistry.create(authStorage);

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Look for the ModelRegistry definition / getError member in any installed pi-coding-agent typings.
fd -t f -i 'model-?registry' --exec rg -n -C3 'getError|loadError|class ModelRegistry|^\s*(static\s+)?create\b' {} \;
fd -t f -i 'pi-coding-agent' -e d.ts --exec rg -n -C2 'getError|loadError|ModelRegistry' {} \;

Repository: coleam00/Archon

Length of output: 41


🌐 Web query:

@mariozechner/pi-coding-agent ModelRegistry getError loadError API

💡 Result:

In the @mariozechner/pi-coding-agent package (part of the badlogic/pi-mono GitHub repository), the ModelRegistry class manages loading and caching models from built-in sources and a custom models.json file. It includes a private loadError field that stores any error encountered while loading custom models. The relevant API methods are:

  • ModelRegistry.create(authStorage: AuthStorage, modelsJsonPath?: string): ModelRegistry — creates a new instance, optionally loading from a custom models.json path.
  • ModelRegistry.inMemory(authStorage: AuthStorage): ModelRegistry — creates without loading from disk.
  • refresh(): void — reloads models from disk, clears caches, resets loadError to undefined, then calls loadModels.
  • getError(): string | undefined — returns the loadError if models.json failed to load (undefined if no error or after a successful refresh).

The loadError is set in the private loadModels method if this.loadCustomModels(this.modelsJsonPath) returns an error. Even if custom models fail to load, built-in models are retained.


🏁 Script executed:

# Check the provider.ts file around lines 213-217 and 233
cat -n packages/providers/src/community/pi/provider.ts | sed -n '210,240p'

Repository: coleam00/Archon

Length of output: 1751


🏁 Script executed:

# Check the test file around lines 317-320
cat -n packages/providers/src/community/pi/provider.test.ts | sed -n '310,330p'

Repository: coleam00/Archon

Length of output: 1246


Remove unnecessary optional chaining from getError() call.

Line 233 calls modelRegistry.getError?.() defensively, but getError() is a documented public method on ModelRegistry that always exists (returns string | undefined). The optional chaining is unnecessary; call it directly as modelRegistry.getError(). The test correctly shows this expectation (line 319).
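A minimal illustration of the point, with a stand-in interface rather than the real SDK class:

```typescript
// getError() is a documented public method on ModelRegistry, so no optional
// chaining is needed at the call site.
interface RegistryLike {
  getError(): string | undefined;
}
const registry: RegistryLike = { getError: () => undefined };

// Before (defensive, unnecessary):  registry.getError?.()
// After (matches the documented API):
const loadError: string | undefined = registry.getError();
```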

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/providers/src/community/pi/provider.ts` around lines 213 - 217, The
code defensively uses optional chaining when invoking modelRegistry.getError
(calling modelRegistry.getError?.()), but ModelRegistry.create guarantees a
public getError(): string | undefined method, so remove the unnecessary optional
chaining and call modelRegistry.getError() directly; update the invocation where
modelRegistry.getError?.() is used (in the block after ModelRegistry.create) to
modelRegistry.getError() to match the documented API and existing tests.

@Wirasm Wirasm merged commit 686bec6 into coleam00:dev Apr 27, 2026
4 checks passed
popemkt added a commit to popemkt/Archon that referenced this pull request Apr 27, 2026
Resolves the conflict on packages/providers/src/community/pi/provider.ts.

- Upstream's coleam00#1284 (ModelRegistry) and our shared/structured-output
  extraction both touch the same region. Upstream removed the inline
  augmentPromptForJsonSchema call site that coleam00#1284 didn't itself need;
  our branch had moved that function to shared/. Resolution keeps the
  shared/ extraction (single source of truth for both Pi and Copilot)
  and re-exports it from pi/provider.ts under the original name so
  existing Pi callers and tests stay byte-for-byte.
- Drops the dead-code lookupPiModel/GetModelFn helper that was a stale
  leftover from an earlier merge attempt — never had a caller and was
  superseded by ModelRegistry upstream.
- Picks up coleam00#1431 — moves e2e-copilot-smoke.yaml under test-workflows/
  alongside the other e2e-*.yaml smokes.

Adds e2e-copilot-all-features smoke (mirrors e2e-minimax-smoke):
  basic chat (PONG) + effort: high (17×23 = 391) + denied_tools: [shell,
  write] (DENIED_OK) + output_format JSON (best-effort via shared/
  structured-output, parsed as {model, ok}). Single bash assert verifies
  all four end-to-end. Doubles as adoption docs.

Validation:
- bun run check:bundled, type-check, lint, format:check — all green.
- bun --filter @archon/providers test — fully green (Pi included).
- Live smoke (Linux + gpt-5-mini, Copilot CLI 1.0.36):
    e2e-copilot-smoke           → 12s, PASS
    e2e-copilot-all-features    → 25s, PASS (all four caps)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@Wirasm Wirasm mentioned this pull request Apr 29, 2026
Wirasm added a commit that referenced this pull request Apr 29, 2026
* chore: update Homebrew formula for v0.3.9

* chore(release-skill): use --help (not version) for Step 1.5 smoke probe (#1359)

The pre-flight binary smoke does a bare `bun build --compile` — it
deliberately skips `scripts/build-binaries.sh` to stay fast. That means
packages/paths/src/bundled-build.ts retains its dev defaults, including
BUNDLED_IS_BINARY = false.

version.ts branches on BUNDLED_IS_BINARY: when true it returns the
embedded string; when false it calls getDevVersion(), which reads
package.json at `SCRIPT_DIR/../../../../package.json`. Inside a compiled
binary SCRIPT_DIR resolves under `$bunfs/root/`, the walk produces a CWD-
relative path that doesn't exist, and the smoke aborts with "Failed to
read version: package.json not found" — a false positive.

Hit during the 0.3.8 release attempt: the real Pi lazy-load fix was
working end-to-end; the smoke test was the only thing failing.

Use --help instead. It exercises the same module-init graph (so it still
catches the real failure modes the skill lists — Pi package.json init
crash, Bun --bytecode bugs, CJS wrapper issues, circular imports under
minify) but has no dev/binary branch, so no false positive.

Also add a longer comment block explaining why --help is preferred, so
this doesn't get "normalized" back to `version` by a future drive-by.

* chore(test-release-skill): preserve archon-stable across test cycles

The brew path of /test-release runs `brew uninstall` in Phase 5 to leave the
system in its pre-test state. For operators using the dual-homebrew pattern
(renamed brew binary at `/opt/homebrew/bin/archon-stable` so it coexists with
a `bun link` dev `archon`), that uninstall wipes the Cellar dir the
`archon-stable` symlink points into → `archon-stable` becomes dangling →
`brew cleanup` sweeps it away on the next brew op. Next time the operator
wants stable, they have to manually re-run `brew-upgrade-archon`.

Fix: make the skill aware of `archon-stable` and restore it transparently.

- Phase 2 item 4: detect the `archon-stable` symlink before any brew op;
  export `ARCHON_STABLE_WAS_INSTALLED=yes` so Phase 5 knows to restore it.
  Only triggers for the brew path (curl-mac/curl-vps don't touch brew so
  they leave `archon-stable` alone).
- Phase 5 brew path: after `brew uninstall + untap`, if the flag was set,
  re-tap + re-install + rename. Verifies the restored `archon-stable`
  reports a version and warns (non-fatal) if the rename target is missing.
  Documents the tradeoff: the restored version is "whatever the tap ships
  today", not necessarily the pre-test version — usually that's what the
  operator wants (the release they just tested becomes stable) but the
  back-version-QA case requires a manual `brew-upgrade-archon` after.
- Phase 1 confirmation banner now mentions that `archon-stable` will be
  preserved so the operator isn't surprised by the reinstall during Phase 5.

No changes to curl-mac/curl-vps paths. No changes to Phase 4 test suite.

* fix(providers/pi): install PI_PACKAGE_DIR shim so Pi workflows run in a compiled binary (#1360)

v0.3.9 made Pi boot-safe: lazy-loading its imports meant `archon version`
no longer crashed on `@mariozechner/pi-coding-agent/dist/config.js`'s
module-init `readFileSync(getPackageJsonPath())`. That's what the
`provider-lazy-load.test.ts` regression test guards.

The fix was only half the problem though. When a Pi workflow actually
runs, sendQuery() triggers the dynamic import — and Pi's config.js
module-init fires then, hitting the exact same ENOENT on
`dirname(process.execPath)/package.json`. Discovered by running
`archon workflow run test-pi` against a locally-compiled 0.3.9 binary:

    [main] Failed: ENOENT: no such file or directory,
           open '/private/tmp/package.json'
        at readFileSync (unknown)
        at <anonymous> (/$bunfs/root/archon-providertest:184:7889)
        at init_config

Boot-safe ≠ runtime-safe. The `/test-release` run for 0.3.9 passed
because it only exercised `archon-assist` (Claude); Pi was never
actually invoked on the released binary.

Fix: before the dynamic `import('@mariozechner/pi-coding-agent')` in
sendQuery, install a PI_PACKAGE_DIR shim. Pi's config.js checks
`process.env.PI_PACKAGE_DIR` first in its `getPackageDir()` and
short-circuits the `dirname(process.execPath)` walk. We write a
minimal `{name, version, piConfig:{}}` stub to
`tmpdir()/archon-pi-shim/package.json` (idempotent — existsSync check)
and set the env var. Pi only reads `piConfig.name`, `piConfig.configDir`,
and `version` from that file, all optional, so the stub surface is
genuinely minimal.

Localized to PiProvider: no global state, no mutation of any shared
config, no upstream fork. Claude and Codex providers are unaffected
(their SDKs don't have this class of module-init side effect).

Verified end-to-end: built a compiled archon binary with this patch,
ran `archon workflow run test-pi --no-worktree` (Pi workflow with
model `anthropic/claude-haiku-4-5`), got a clean response. Before the
patch, same binary crashed at `dag_node_started` with the ENOENT above.

Regression test added: asserts `PI_PACKAGE_DIR` is set after sendQuery
hits even its fast-fail "no model" path. Together with the existing
`provider-lazy-load.test.ts` (boot-safe) this covers both halves.
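The shim described above can be sketched as follows. This is a reconstruction from the commit message, not the shipped code: it assumes Pi's config.js checks `process.env.PI_PACKAGE_DIR` before walking `dirname(process.execPath)`, and that only `name`, `version`, and `piConfig` are read from the stub.

```typescript
import { existsSync, mkdirSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Write a minimal package.json stub and point PI_PACKAGE_DIR at it, so Pi's
// module-init readFileSync short-circuits instead of probing next to the
// compiled binary. Idempotent: the stub is only written once.
function installPiPackageDirShim(): string {
  const shimDir = join(tmpdir(), 'archon-pi-shim');
  const stub = join(shimDir, 'package.json');
  if (!existsSync(stub)) {
    mkdirSync(shimDir, { recursive: true });
    // Minimal surface per the commit text; all fields optional to Pi.
    writeFileSync(
      stub,
      JSON.stringify({ name: 'archon-pi-shim', version: '0.0.0', piConfig: {} }),
    );
  }
  process.env.PI_PACKAGE_DIR = shimDir;
  return shimDir;
}
```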

* feat(providers): autodetect canonical binary install paths for Claude and Codex (#1361)

Both binary resolvers previously stopped at env-var + explicit config and
threw a "not found" error when neither was set. Users who followed the
upstream-recommended install flow (Anthropic's `curl install.sh` for
Claude, `npm install -g @openai/codex`) still had to manually set either
`CLAUDE_BIN_PATH` / `CODEX_BIN_PATH` or the corresponding config field
before any workflow could run.

Add a tier-N autodetect step between the explicit config tier and the
install-instructions throw. Purely additive: env and config still win
when set (precedence covered by new tests). On autodetect miss, the same
install-instructions error fires as before.

Claude probe list (verified against docs.claude.com "Uninstall Claude
Code → Native installation" section):
  - $HOME/.local/bin/claude            (mac/linux native installer)
  - $USERPROFILE\.local\bin\claude.exe (Windows native installer)

Codex probe list (verified against openai/codex README; npm global-
install puts the binary at `{npm_prefix}/bin/<name>` on POSIX,
`{npm_prefix}\<name>.cmd` on Windows):
  - $HOME/.npm-global/bin/codex   (user-set `npm config set prefix`)
  - /opt/homebrew/bin/codex       (mac arm64 with homebrew-node)
  - /usr/local/bin/codex          (mac intel / linux system node)
  - %APPDATA%\npm\codex.cmd       (Windows npm global default)
  - $HOME\.npm-global\codex.cmd   (Windows user-set prefix)

Not probed (explicit override still required):
  - Custom npm prefixes — `npm root -g` would need a subprocess per
    resolve, too much surface for a probe helper
  - `brew install --cask codex` — cask layout isn't a PATH binary
  - Manual GitHub Releases extracts — placement is user-determined
  - `~/.bun/bin/codex` — not documented in openai/codex README

Pi provider intentionally has no equivalent change: the Pi SDK is
bundled into the archon binary (no subprocess), so there's no "binary"
to resolve. Pi auth lives at `~/.pi/agent/auth.json` which the SDK
already finds by default, and the PR A shim (`PI_PACKAGE_DIR`) handles
the package-dir case via Pi's own documented escape hatch.

E2E verified: removed both config entries from ~/.archon/config.yaml,
rebuilt compiled binary, ran `archon workflow run archon-assist` and a
Codex workflow. Logs showed `source: 'autodetect'` for both, responses
returned cleanly.
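The tiered resolution described above can be sketched like this. Function and parameter names are assumptions; the Claude probe paths are the ones listed in the commit message.

```typescript
import { existsSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

// Precedence: env var > explicit config > autodetect probes > throw with
// install instructions (same terminal error as before this change).
function resolveClaudeBinary(configPath?: string): { path: string; source: string } {
  const fromEnv = process.env.CLAUDE_BIN_PATH;
  if (fromEnv) return { path: fromEnv, source: 'env' };
  if (configPath) return { path: configPath, source: 'config' };
  const probes = [
    join(homedir(), '.local', 'bin', 'claude'), // mac/linux native installer
    join(homedir(), '.local', 'bin', 'claude.exe'), // Windows native installer
  ];
  for (const probe of probes) {
    if (existsSync(probe)) return { path: probe, source: 'autodetect' };
  }
  throw new Error('claude binary not found; set CLAUDE_BIN_PATH or install Claude Code');
}
```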

* fix(providers/test): use os.homedir() instead of $HOME in claude binary autodetect test

The native-installer autodetect test computed its expected path from
process.env.HOME, but the implementation uses node:os homedir(). On
Windows, HOME is typically unset (Windows uses USERPROFILE), so the
test fell back to '/Users/test' while the resolver returned the real
home dir — making the spy's path-equality check fail and breaking CI
on windows-latest.

Mirror the implementation by importing homedir() from node:os and
joining with node:path so the expected path matches the actual
platform-resolved home and separator.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(server): contain Discord login failure so it doesn't kill the server (#1365)

Reported in #1365: a user running `archon serve` with DISCORD_BOT_TOKEN
set but the "Message Content Intent" toggle disabled in the Discord
Developer Portal saw the entire server crash with `Used disallowed
intents`. Discord rejects the gateway connection (close code 4014) when
a privileged intent is requested without being enabled, and the
unguarded `await discord.start()` propagated the error all the way up,
taking the web UI down with it.

Wrap discord.start() in try/catch — log the failure with an actionable
hint (special-cased for the disallowed-intent error) and continue
running. Other adapters and the web UI come up regardless. The shutdown
handler already uses optional chaining (`discord?.stop()`) so nulling
discord after a failed start is safe.

Other adapters (Telegram, Slack, GitHub, Gitea, GitLab) have the same
unguarded-start pattern but are out of scope for this fix — addressing
them is tracked separately.

Also expanded the Discord setup docs with a caution callout that names
the exact error string and the new log event so users can grep for
both.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
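The containment pattern reads roughly like this sketch. Adapter and logger shapes are stand-ins for Archon's real types, and the event name and hint wording are assumptions; only the try/catch-and-null structure and the disallowed-intents special case come from the commit message.

```typescript
interface Adapter {
  start(): Promise<void>;
  stop(): Promise<void>;
}

// Start Discord but contain failure: log with an actionable hint and return
// null so the rest of the server (other adapters, web UI) comes up anyway.
// Shutdown already uses discord?.stop(), so a null adapter is safe.
async function startDiscordContained(
  discord: Adapter | null,
  log: (event: string, hint: string) => void,
): Promise<Adapter | null> {
  if (!discord) return null;
  try {
    await discord.start();
    return discord;
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    const hint = msg.includes('disallowed intents')
      ? 'Enable "Message Content Intent" in the Discord Developer Portal'
      : 'Check DISCORD_BOT_TOKEN and bot permissions';
    log('discord.start_failed', hint); // event name is an assumption
    return null;
  }
}
```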

* docs(script-nodes): dedicated guide + teach the archon skill (#1362)

* docs(script-nodes): add dedicated guide and teach the archon skill how to write them

Script nodes (script:) have been a first-class DAG node type since v0.3.3 but
were documented only as one-liners in CLAUDE.md and a CI smoke test. Claude
Code reading the archon skill would see "Four Node Types: command, prompt,
bash, loop" and reach for bash+node/python one-liners instead of a proper
script node — losing bun's --no-env-file isolation, uv's --with dependency
pins, and the .archon/scripts/ reuse story.

- New packages/docs-web/src/content/docs/guides/script-nodes.md mirroring the
  structure of loop-nodes.md / approval-nodes.md: schema, inline vs named
  dispatch, runtime/deps semantics, scripts directory precedence (repo > home),
  extension-runtime mapping, env isolation, stdout/stderr contract, patterns,
  and the explicit list of ignored AI fields.
- guides/authoring-workflows.md and guides/index.md updated so the new guide is
  discoverable from both the node-types table and the guides landing page.
- reference/variables.md calls out the no-shell-quote difference between
  bash: and script: substitution — a subtle correctness trap when adapting a
  bash pattern into a script node.
- Sidebar order bumped +1 on hooks/mcp-servers/skills/global-workflows/
  remotion-workflow to slot script-nodes at order 5 next to the other
  node-type guides.

- .claude/skills/archon/SKILL.md: replaces stale "Four Node Types" (which
  also silently omitted approval and cancel) with the accurate seven, with a
  script-node code block showing both inline and named patterns.
- references/workflow-dag.md: full Script Node section covering dispatch,
  resolution, deps, stdout contract, and the list of AI-only fields that are
  ignored; validation-rules list updated.
- references/dag-advanced.md and references/variables.md: retry-support line
  corrected; no-shell-quote note added.
- examples/dag-workflow.yaml: added an extract-labels TypeScript script node
  and updated the header comment.
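
The inline-vs-named split the guide documents, as a minimal YAML sketch (field names and nesting follow this commit's description; the exact shipped schema may differ):

```yaml
nodes:
  - id: extract-labels-inline
    # Inline dispatch: the script body lives in the workflow file.
    # .ts/.js bodies run under bun (--no-env-file isolation), .py under uv.
    script: |
      // stdout becomes $extract-labels-inline.output for downstream nodes
      console.log(JSON.stringify({ ok: true }));
  - id: extract-labels-named
    # Named dispatch: resolved from the scripts directory, repo copy
    # winning over the home copy per the precedence described above.
    script: extract-labels.ts
```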

* fix(docs): review follow-ups for script-node guide

- skills example: extract-labels was reading process.env.ISSUE_JSON which is
  never set; use String.raw`$fetch-issue.output` so the upstream bash node's
  JSON is actually consumed
- guides/script-nodes.md + skills/workflow-dag.md: idle_timeout is accepted
  but ignored on script (and bash) nodes — executeScriptNode only reads
  node.timeout. Clarify that script/bash use `timeout`, not idle_timeout
- archon-workflow-builder.yaml: prompt enumerated only bash/prompt/command/loop,
  so the AI builder could never propose script or approval nodes. Add both
  (plus examples + rule about script output not being shell-quoted) and
  regenerate bundled defaults
- book/dag-workflows.md + book/quick-reference.md + adapters/web.md: fill in
  the node-type references that were missing script, approval, and cancel.
  adapters/web.md also overclaimed "loop" in the palette — NodePalette.tsx
  only drags command/prompt/bash, so note that the other kinds are YAML-only

* docs/skill: general hardening — fix inaccuracies, fill workflow/CLI/env gaps, add good-practices + troubleshooting (#1363)

* fix(skill/when): document the full `when:` operator set and compound expressions

The skill reference previously stated "operators: ==, != only" which is
materially wrong — the condition evaluator supports ==, !=, <, >, <=, >=
plus && / || compound expressions with && binding tighter than ||, plus
dot-notation JSON field access. An agent authoring a workflow from the
skill would think half the operators don't exist.

Replaces the single-sentence section with a structured reference covering:
- All six comparison operators (string and numeric modes)
- Compound expressions with precedence rules and short-circuit eval
- JSON dot notation semantics and failure modes
- The fail-closed rules in full (invalid expression, non-numeric side,
  missing field, skipped upstream)

Grounded in packages/workflows/src/condition-evaluator.ts.
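
The operator set and precedence above can be sketched as a toy evaluator. This is a hypothetical re-implementation for illustration — not the code in packages/workflows/src/condition-evaluator.ts, which also handles cases this sketch omits (e.g. quoted strings containing `||`):

```typescript
type Ctx = Record<string, unknown>;

// Resolve a token: numeric literal, quoted string, or dot-notation field access.
function resolve(token: string, ctx: Ctx): string | undefined {
  if (/^-?\d+(\.\d+)?$/.test(token)) return token;
  const quoted = token.match(/^'(.*)'$/) ?? token.match(/^"(.*)"$/);
  if (quoted) return quoted[1];
  let cur: unknown = ctx;
  for (const part of token.split(".")) {
    if (cur === null || typeof cur !== "object") return undefined; // missing field
    cur = (cur as Record<string, unknown>)[part];
  }
  return cur === undefined ? undefined : String(cur);
}

// One comparison; every bad input falls through to false (fail closed).
function compare(expr: string, ctx: Ctx): boolean {
  const m = expr.match(/^(.+?)\s*(==|!=|<=|>=|<|>)\s*(.+)$/);
  if (!m) return false;                                  // invalid expression
  const [, lhs = "", op = "", rhs = ""] = m;
  const a = resolve(lhs.trim(), ctx);
  const b = resolve(rhs.trim(), ctx);
  if (a === undefined || b === undefined) return false;  // missing field
  if (op === "==") return a === b;
  if (op === "!=") return a !== b;
  const na = Number(a), nb = Number(b);
  if (Number.isNaN(na) || Number.isNaN(nb)) return false; // non-numeric side
  return op === "<" ? na < nb : op === ">" ? na > nb
       : op === "<=" ? na <= nb : na >= nb;
}

// || is the outer split, so && binds tighter; some/every short-circuit.
function evaluateWhen(expr: string, ctx: Ctx): boolean {
  return expr.split("||").some((clause) =>
    clause.split("&&").every((c) => compare(c.trim(), ctx)));
}
```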

* feat(skill): document Approval and Cancel node types

Approval and cancel nodes are first-class DAG node types (approval since the
workflow lifecycle work in #871, cancel as a guarded-exit primitive) but the
skill never described either one. An agent reading the skill and asked to
"add a review gate before implementation" or "stop the workflow if the input
is unsafe" would fall back to bash + exit 1, losing the proper semantics
(cancelled vs. failed, on_reject AI rework, web UI auto-resume).

Approval node coverage (references/workflow-dag.md, SKILL.md):
- Full configuration block with message, capture_response, on_reject
- The interactive: true workflow-level requirement for web UI delivery
- Approve/reject commands across all platforms (CLI, slash, natural
  language) and the capture_response → $node-id.output flow
- Ignored-fields list + the on_reject.prompt AI sub-node exception

Cancel node coverage (references/workflow-dag.md, SKILL.md):
- Single-field schema (cancel: "<reason>")
- Lifecycle: cancelled (not failed); in-flight parallel nodes stopped;
  no DAG auto-resume path
- The "cancel: vs bash-exit-1" decision rule (expected precondition miss
  vs. check itself failing)
- Two canonical patterns — upstream-classification gate, pre-expensive-step
  gate

Validation-rules list updated to enumerate approval/cancel constraints
(message non-empty, on_reject.max_attempts range 1-10, cancel reason
non-empty), plus a forward note that script: joins the mutually-exclusive
set once PR #1362 lands.

Placement in both files is after the Loop section and before the validation
section, so this commit stays additive with respect to PR #1362's Script
node insertion between Bash and Loop — rebase is clean.
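
A minimal sketch of the two node shapes as this commit describes them (the exact nesting is an assumption about the shipped schema):

```yaml
interactive: true        # workflow-level; required for web UI gate delivery
nodes:
  - id: review-gate
    approval:
      message: "Review the plan before implementation proceeds."  # must be non-empty
      capture_response: true          # reviewer comment -> $review-gate.output
      on_reject:
        prompt: "Rework the plan per the reviewer's comment."  # the AI sub-node exception
        max_attempts: 3               # validated range 1-10
  - id: bail-out
    when: "classify-input.output.verdict == 'unsafe'"  # expected precondition miss
    cancel: "Input classified as unsafe"   # run ends cancelled, not failed
```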

* feat(skill): document workflow-level fields beyond name/provider/model

The skill's Schema section previously showed only name, description, provider,
and model at the workflow level — which is most of a stub. Agents asked to
"use the 1M-context Claude beta" or "run this under a network sandbox" or
"add a fallback model in case Opus rate-limits" had no way to discover
that any of these fields existed at the workflow level.

Adds a comprehensive Workflow-Level Fields section covering:
- Core: name, description, provider, model, interactive (with explicit
  callout that interactive: true is REQUIRED for approval/loop gates on
  web UI — a common footgun)
- Isolation: worktree.enabled for pin-on/pin-off (the only worktree field
  at workflow level; baseBranch/copyFiles/path/initSubmodules are
  config.yaml only, so a cross-reference points there)
- Claude SDK advanced: effort, thinking, fallbackModel, betas, sandbox,
  with explicit per-node-only exceptions (maxBudgetUsd, systemPrompt)
- Codex-specific: modelReasoningEffort (with note that it's NOT the same
  as Claude's effort — this has confused users), webSearchMode,
  additionalDirectories
- A complete worked example combining sandbox + approval + interactive

All fields cross-referenced against packages/workflows/src/schemas/workflow.ts
and packages/workflows/src/schemas/dag-node.ts.
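
The worked example the section describes, sketched in YAML (field names are the ones this commit enumerates; value shapes such as the sandbox line are assumptions):

```yaml
name: guarded-refactor
description: Refactor under a sandbox with a human review gate.
provider: claude
model: opus
interactive: true        # REQUIRED for approval/loop gates on the web UI
effort: high             # Claude SDK knob; NOT Codex's modelReasoningEffort
fallbackModel: sonnet    # used if the primary model rate-limits
sandbox: true            # shape assumed; see schemas/workflow.ts
worktree:
  enabled: true          # the only worktree field honored at workflow level
```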

* feat(skill/loop): document interactive loops and gate_message

Interactive loop nodes pause between iterations for human feedback via
/workflow approve — used by archon-piv-loop and archon-interactive-prd.
The skill's Loop Nodes section previously omitted both interactive: true
and gate_message entirely, so an agent writing a guided-refinement
workflow wouldn't know the feature exists or that gate_message is
required at parse time.

Adds:
- interactive and gate_message rows to the config table (marking
  gate_message as required when interactive: true — enforced by the
  loader's superRefine)
- A dedicated "Interactive Loops" subsection explaining the 6-step
  iterate-pause-approve-resume flow
- Explicit call-out that $LOOP_USER_INPUT populates ONLY on the first
  iteration of a resumed session — easy to miss and a common surprise
- Workflow-level interactive: true requirement for web UI delivery
  (loader warning otherwise) so the full-flow example is complete
- Note that until_bash substitution DOES shell-quote $nodeId.output
  (unlike script bodies) — called out since the audit surfaced this
  inconsistency
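
The full shape as this commit describes it, sketched (nesting the fields under `loop:` is an assumption about the shipped schema):

```yaml
  - id: refine-draft
    loop:
      interactive: true
      # gate_message is REQUIRED when interactive: true (loader superRefine)
      gate_message: "Review this draft; /workflow approve to continue."
      max_iterations: 5
      # until_bash substitution DOES shell-quote $nodeId.output,
      # unlike script bodies.
      until_bash: "grep -q APPROVED \"$ARTIFACTS_DIR/status.txt\""
    prompt: |
      Refine the draft. Reviewer feedback, populated only on the first
      iteration after a resume: $LOOP_USER_INPUT
```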

* fix(skill/cli): complete the CLI command reference with missing lifecycle commands

The CLI reference previously documented only list, run, cleanup, validate,
complete, version, setup, and chat — missing nearly every workflow
lifecycle command an agent needs to operate a paused, failed, or stuck
run. The interactive-workflows reference assumed these commands existed
without actually documenting them.

Adds full documentation for:
- archon workflow status — show running workflow(s)
- archon workflow approve <run-id> [comment] — resume approval gate
  (also populates $LOOP_USER_INPUT on interactive loops and the gate
  node's output when capture_response: true)
- archon workflow reject <run-id> [reason] — reject gate; cancels or
  triggers on_reject rework depending on node config
- archon workflow cancel <run-id> — terminate running/paused with
  in-flight subprocess kill
- archon workflow abandon <run-id> — mark stuck row cancelled without
  subprocess kill (for orphan-cleanup after server crashes — matches
  the #1216 precedent)
- archon workflow resume <run-id> [message] — force-resume specific
  run (auto-resume is default; this is for explicit override)
- archon workflow cleanup [days] — disk hygiene for old terminal runs
  (with explicit callout that it does NOT transition 'running' rows,
  a common confusion)
- archon workflow event emit — used inside loop prompts for state
  signalling; documented so agents don't invent their own mechanism
- archon continue <branch> [flags] [msg] — iterative-session entry
  point with --workflow and --no-context flags

Also:
- Adds --allow-env-keys flag to the `workflow run` flag table with
  audit-log context and the env-leak-gate remediation use case
- Adds an "Auto-resume without --resume" note disambiguating when
  --resume is needed vs. when auto-resume handles it
- Adds --include-closed flag to `isolation cleanup`, which was
  previously missing; converts the flag list to a structured table
- Explains the cancel/abandon distinction (live subprocess vs. orphan)

All grounded in packages/cli/src/commands/workflow.ts, continue.ts,
and isolation.ts.

* feat(skill/repo-init): add scripts/ and state/, three-path env model, per-project env injection

The repo-init reference was missing two first-class .archon/ directories
(scripts/ since v0.3.3, state/ since the workflow-state feature) and had
nothing to say about env — the #1 thing a user hits on first run when
their repo has a .env file with API keys.

Directory tree updates:
- Adds .archon/scripts/ with the extension->runtime rule (.ts/.js -> bun,
  .py -> uv) so agents know where to put named scripts referenced by
  script: nodes.
- Adds .archon/state/ with explicit "always gitignore" callout — these
  are runtime artifacts, not source. Previously undocumented in the skill.
- Adds .archon/.env (repo-scoped Archon env) and distinguishes it from
  the target repo's top-level .env.
- Adds a "What each directory is for" list so the structure isn't just
  a tree with no narrative.

.gitignore guidance:
- state/ and .env added as must-gitignore (state/ matches CLAUDE.md and
  reference/archon-directories.md — skill was lagging).
- mcp/ demoted to conditional — gitignore only if you hardcode secrets.

New "Three-Path Env Model" section:
- ~/.archon/.env (trusted, user), <cwd>/.archon/.env (trusted, repo),
  <cwd>/.env (UNTRUSTED, target project — stripped from subprocess env).
- Precedence (override: true across archon-owned paths) and the
  observable [archon] loaded N keys / stripped K keys log lines so
  operators can verify what actually happened.
- Decision tree for where to put API keys vs. target-project env vs.
  things Archon shouldn't touch.
- Links to archon setup --scope home|project with --force for writing
  to the right file with timestamped backups.

New "Per-Project Env Injection" section:
- Documents both managed surfaces: .archon/config.yaml env: block
  (git-committed, $REF expansion) and Web UI Settings → Projects →
  Env Vars (DB-stored, never returned over API).
- Names every execution surface that receives the injected vars:
  Claude/Codex/Pi subprocess, bash: nodes, script: nodes, and direct
  codebase-scoped chat.
- Documents the env-leak gate with all 5 remediation paths so an agent
  hitting "Cannot register: env has sensitive keys" knows the options.

Grounded in CHANGELOG v0.3.7 (three-path env + setup flags), v0.3.0
(env-leak gate), and reference/security.md on the docs site.
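
The precedence and strip behavior above, as a hypothetical sketch (the real loader lives in the archon packages and may differ in detail):

```typescript
type EnvMap = Record<string, string>;

// Merge the three env paths into what a subprocess actually sees.
function buildSubprocessEnv(
  userEnv: EnvMap,    // ~/.archon/.env       (trusted, loaded first)
  repoEnv: EnvMap,    // <cwd>/.archon/.env   (trusted, overrides user)
  targetEnv: EnvMap,  // <cwd>/.env           (UNTRUSTED, stripped)
  processEnv: EnvMap,
): EnvMap {
  // Archon-owned paths load with override: true — repo beats user.
  const merged: EnvMap = { ...processEnv, ...userEnv, ...repoEnv };
  // Keys that came only from the target repo's .env never reach subprocesses.
  for (const key of Object.keys(targetEnv)) {
    if (!(key in userEnv) && !(key in repoEnv)) delete merged[key];
  }
  return merged;
}
```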

* fix(skill/authoring-commands): correct override paths and add home-scoped commands

The file-location and discovery sections described an override layout that
does not match the actual resolver. It showed:

  .archon/commands/defaults/archon-assist.md  # Overrides the bundled

and claimed `.archon/commands/defaults/` was where repo-level overrides
lived. In fact the resolver (executor-shared.ts:152-200 +
command-validation.ts) walks `.archon/commands/` 1 level deep and uses basename
matching — putting `archon-assist.md` at the top of `.archon/commands/`
is the canonical way to override the bundled version. The `defaults/`
subfolder is an Archon-internal convention for shipping bundled defaults,
not a user-facing override pattern.

Also, home-scoped commands (`~/.archon/commands/`, shipped in v0.3.7)
were completely absent — agents authoring personal helpers wouldn't
know they could live at the user level and be shared across every repo.

Changes:
- File Location section now shows all three discovery scopes (repo,
  home, bundled) with precedence ordering and 1-level subfolder rules
- Duplicate-basename rule documented as a user error surface
- Discovery and Priority section rewritten with accurate 3-step lookup
  order — no more references to the nonexistent defaults/ override path
- Adds the Web UI "Global (~/.archon/commands/)" palette label note so
  users authoring helpers for the builder know what to expect

No code changes — this is a pure fix of stale/incorrect skill reference
material.

* feat(skill): add workflow good-practices and troubleshooting reference pages

Closes two gaps from the audit. The skill previously had zero guidance on
designing multi-node workflows (what to avoid, what to reach for first,
how to structure artifact chains) and zero guidance on where to look
when things go wrong (log paths, env-leak gate remediations, orphan-row
cleanup, resume semantics).

New references/good-practices.md (9 Good Practices + 7 Anti-Patterns):

- Use deterministic nodes (bash:/script:) for deterministic work, AI for
  reasoning — the single biggest quality lever
- output_format required whenever downstream when: reads a field — the
  most common source of "workflow silently routes wrong"
- trigger_rule: none_failed_min_one_success after conditional branches —
  the classic bug where all_success fails because a skipped when:-gated
  branch doesn't count as a success
- context: fresh requires artifacts for state passing — commands must
  explicitly "read $ARTIFACTS_DIR/..." when downstream of fresh
- Cheap models (haiku) for glue, strong for substance
- Workflow descriptions as routing affordances
- Validate (archon validate workflows) + smoke-run before shipping
- Artifact-chain-first design
- worktree.enabled: true for code-changing workflows (reversibility)
- Anti-patterns with before/after YAML examples for each (AI-for-tests,
  free-form when: matching, context: fresh without artifacts, long flat
  AI-node layers, secrets in YAML, retry on loop nodes, tiny
  max_iterations, missing workflow-level interactive:, tool-restricted
  MCP nodes)

New references/troubleshooting.md:

- Log location (~/.archon/workspaces/<owner>/<repo>/logs/<run-id>.jsonl)
  with jq recipes for common queries (last assistant message, failed
  events, full stream)
- Artifact location for cross-node handoff debugging
- 9 Common Failure Modes, each with root cause + concrete fix:
  - $BASE_BRANCH unresolvable
  - Env-leak gate (5 remediations)
  - Claude/Codex binary not found (compiled-binary-only)
  - "running" forever (AI working / orphan / idle_timeout)
  - Mid-workflow failure and auto-resume semantics
  - Approval gate missing on web UI (workflow-level interactive:)
  - MCP plugin connection noise (filtered by design)
  - Empty $nodeId.output / field access (4 causes)
- Diagnostic command cheat sheet (list, status, isolation list, validate,
  tail-log, --verbose, LOG_LEVEL=debug)
- Escalation protocol (version + validate + log tail + CHANGELOG + issue)

SKILL.md routing table now dispatches "Workflow good practices /
anti-patterns" and "Troubleshoot a failing / stuck workflow" to the new
references so an agent can find them without having to know they exist.

* docs(book): update node-types coverage from four to all seven

The book is the curated first-contact reading path (landing page → "Get
Started" → /book/). Both dag-workflows.md and quick-reference.md were
stuck on "four node types" — missing script, approval, and cancel. A user
reading the book as their first introduction would form an incomplete
mental model, then find three more node types in the reference section
later with no explanation of when they arrived.

book/dag-workflows.md:
- "four node types" → "seven node types. Exactly one mode field is
  required per node"
- Table now lists Command, Prompt, Bash, Script, Loop, Approval, Cancel
  with one-line "when to use" for each, and cross-links to the dedicated
  guide pages for Script / Loop / Approval
- New sections below the table for Script (inline + named examples with
  runtime and deps), Approval (with the interactive: true workflow-level
  note that's easy to miss), and Cancel (guarded-exit pattern) — keeping
  the existing narrative shape for Bash and Loop

book/quick-reference.md:
- Node Options table now includes script, approval, cancel rows
- agents row added (inline sub-agents, Claude-only)
- New "Script-specific fields" and "Approval-specific fields" subsections
  so the cheat-sheet is actually complete rather than pointing users
  elsewhere for the required constraints
- Retry row callout that loop nodes hard-error on retry — previously
  omitted
- bash timeout note widened to cover script timeout (same semantics)

Both files are docs-web content; the CI build on the docs-script-nodes
PR (#1362) previously validated the Starlight build path with a similar
table addition, so this should render clean.

* fix(skill/cli): remove nonexistent \`archon workflow cancel\`, fix workflow status jq recipe

Two accuracy issues from the PR code-reviewer (comment 4311243858).

C1: \`archon workflow cancel <run-id>\` does NOT exist as a CLI subcommand.
The switch at packages/cli/src/cli.ts:318-485 dispatches on list / run /
status / resume / abandon / approve / reject / cleanup / event — running
\`archon workflow cancel\` hits the default case and exits with "Unknown
workflow subcommand: cancel" (cli.ts:478-484). Active cancellation is
only available via:
  - /workflow cancel <run-id> chat slash command (all platforms)
  - Cancel button on the Web UI dashboard
  - POST /api/workflows/runs/{runId}/cancel REST endpoint

cli-commands.md: removed the \`### archon workflow cancel <run-id>\`
subsection; kept the \`abandon\` subsection but made it explicit that
abandon does NOT kill a subprocess. Added a call-out box at the bottom
of the abandon section explaining where to go for actual cancellation.

troubleshooting.md "running forever" section: split the original
cancel-vs-abandon advice into three bullets — Web UI / CLI abandon (for
orphans, no subprocess kill) / chat \`/workflow cancel\` (for live runs
that need interruption). Added an explicit "there is no archon workflow
cancel CLI subcommand" parenthetical since the wrong command was being
suggested in flow.

I1: the \`archon workflow list --json\` diagnostic used an incorrect jq
filter. workflow list's --json output (workflow.ts:185-219) has shape
{ workflows: [{ name, description, provider?, model?, ... }], errors: [...] }
with no \`runs\` field — \`jq '.workflows[] | select(.runs)'\` returns empty
unconditionally. Replaced with \`archon workflow status --json | jq '.runs[]'\`,
which matches the actual shape of workflowStatusCommand at
workflow.ts:852+ ({ runs: WorkflowRun[] }). Also tightened the narration
to distinguish JSON from human-readable status output.

No change to the commit history in this PR — these are follow-up fixes
to claims I introduced in earlier commits of this branch (f10b989e for
C1, 66d2b86e for I1).

* fix(skill): remove env-leak gate references (feature was removed in provider extraction)

C2 from the PR code-reviewer (comment 4311243858). The pre-spawn env-leak
gate was removed from the codebase during the provider-extraction refactor
— see TODO(#1135) at packages/providers/src/claude/provider.ts:908. Zero
hits for --allow-env-keys / allowEnvKeys / allow_env_keys / allow_target_repo_keys
across packages/. The CLI's parseArgs (cli.ts:182-208) has no
--allow-env-keys option, and because parseArgs uses strict: false, an
unknown --allow-env-keys would be silently ignored rather than rejected.

What remains accurate and is NOT touched:
- Three-Path Env Model section (user/repo archon-owned envs are loaded;
  target repo <cwd>/.env keys are stripped from process.env at boot)
  still correctly describes current behavior, grounded in
  packages/paths/src/strip-cwd-env.ts + env-integration.test.ts
- Per-Project Env Injection section (Option 1: .archon/config.yaml env:
  block; Option 2: Web UI Settings → Projects → Env Vars) is unchanged —
  both remain the sanctioned way to get env vars into subprocesses

Removed claims (all three files):
- cli-commands.md: --allow-env-keys flag row in the workflow run flags
  table
- repo-init.md: the "Env-leak gate" subsection at the end of Per-Project
  Env Injection listing 5 remediations (all of which reference UI/CLI/
  config surfaces that don't exist). Replaced with a succinct callout
  that explains the actual current behavior — target repo .env keys are
  stripped, workflows that need those values should use managed
  injection — so the reader still gets the "where to put my env vars"
  answer
- troubleshooting.md: the "Cannot register: codebase has sensitive env
  keys" section (error message that can no longer be emitted)

If the env-leak gate is ever resurrected per TODO(#1135), the docs can be
re-added then. The CHANGELOG v0.3.0 entry describing the gate is a
historical record of past behavior and does not need to be rewritten.

* fix(skill/troubleshooting): correct JSONL event type names and field name

C3 from the PR code-reviewer (comment 4311243858). The troubleshooting
reference's event-types table used _started / _completed / _failed
suffixes, but packages/workflows/src/logger.ts:19-30 shows the actual
WorkflowEvent.type enum is:

  workflow_start | workflow_complete | workflow_error |
  assistant | tool | validation |
  node_start | node_complete | node_skipped | node_error

The second jq recipe also queried `.event` but the discriminator is `.type`.

Fixes:
- Event table: renamed columns (_started → _start, _completed → _complete,
  _failed → _error). Explicitly called out the field name as `type` so the
  reader knows what jq selector to use
- Replaced the "tool_use / tool_result" row with a single `tool` row and
  listed its actual payload fields (tool_name, tool_input, duration_ms,
  tokens) — tool_use/tool_result are SDK message kinds that appear within
  the AI stream, not top-level log event types
- Added a `validation` row (was missing; it's emitted by workflow-level
  validation calls with `check` and `result` fields)
- Removed `retry_attempt` row — this event type is not emitted to the
  JSONL file. Retry bookkeeping goes through pino logs, not the workflow
  log file
- Added an explicit callout that loop_iteration_started /
  loop_iteration_completed (and other emitter-only events) go through
  the workflow event emitter + DB workflow_events table, NOT the JSONL
  file. Pointed readers to the DB or Web UI for loop-level detail. This
  distinguishes the two parallel event systems — easy to conflate
  (store.ts:11-17 uses _started/_completed/_failed for the DB side,
  logger.ts uses _start/_complete/_error for JSONL)
- Fixed the "all failed events" jq recipe: .event → .type and _failed → _error
- Minor cleanup: the inline "tool_use events" mention in the "running
  forever" section said the wrong event name — updated to "tool or
  assistant events in the tail"

Grounded in packages/workflows/src/logger.ts (canonical JSONL event
shape) and packages/workflows/src/store.ts (the parallel DB event
naming, which the reviewer correctly flagged as different and worth
keeping distinct).
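
The corrected jq recipe (select on `.type`, `_error` suffix) expressed as a small TypeScript filter over JSONL lines — illustrative only; event fields beyond `type` are assumptions:

```typescript
// Return the failure events from a JSONL workflow log, discriminating
// on the `type` field (logger.ts enum), not `.event`.
function failedEvents(jsonl: string): Array<Record<string, unknown>> {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as Record<string, unknown>)
    .filter((ev) => ev.type === "node_error" || ev.type === "workflow_error");
}
```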

* fix(skill): two stragglers from the code-reviewer audit

Cleanup of two references that slipped through the earlier C1 and C3 fixes:

- references/troubleshooting.md:126: \`node_failed\` → \`node_error\`
  (the "Node output is empty" diagnostics section references the JSONL
  log, which uses the logger.ts enum — not the DB workflow_events table
  which does use \`node_failed\`). The C3 fix corrected the event table
  and one jq recipe but missed this inline mention.

- references/interactive-workflows.md:106: removed \`archon workflow
  cancel <run-id>\` (nonexistent CLI subcommand) from the
  troubleshooting bullet. This was pre-existing before the hardening
  PR but fell within the C1 remediation scope. Replaced with the
  correct triage: reject (approval gate only) vs abandon (orphan
  cleanup, no subprocess kill) vs chat /workflow cancel (actual
  subprocess termination).

Grounded in the same sources as the earlier C1/C3 commits:
packages/cli/src/cli.ts:318-485 (no cancel case) and
packages/workflows/src/logger.ts:19-30 (JSONL type enum).

* feat(skill): point to archon.diy as the canonical docs source

The skill had no reference to archon.diy (the live docs site built from
packages/docs-web/). Several reference files said "see the docs site"
without naming the URL, leaving the agent to guess or grep the repo for
the hostname. An agent with the skill loaded should know that when the
distilled reference pages don't cover a case, the full canonical docs
are one WebFetch away.

SKILL.md: new "Richer Context: archon.diy" section between Routing and
Running Workflows. Covers:
- When to reach for the live docs (longer examples, tutorial framing,
  features the skill only mentions in passing, "where's that
  documented?" user questions)
- URL map — 13 starting points covering getting-started, book (tutorial
  series), guides/ (authoring + per-node-type + per-node-feature),
  reference/ (variables, CLI, security, architecture, configuration,
  troubleshooting), adapters/, deployment/
- Precedence: skill refs first (context-cheap, tuned for agents), docs
  site as escalation. Prevents agents defaulting to WebFetch when a
  local skill ref already covers the answer

Also upgrades the 5 existing generic "docs site" mentions across
reference files to concrete archon.diy URLs with anchor fragments where
helpful:
- good-practices.md: Inline sub-agents pattern → archon.diy/guides/
  authoring-workflows/#inline-sub-agents
- troubleshooting.md: "Install page on the docs site" → archon.diy/
  getting-started/installation/
- workflow-dag.md: "Workflow Description Best Practices" → anchor link;
  sandbox schema reference → archon.diy/guides/authoring-workflows/
  #claude-sdk-advanced-options
- repo-init.md: Security Model reference → archon.diy/reference/
  security/#target-repo-env-isolation (deep-link into the section that
  covers the <cwd>/.env strip behavior)

URL source of truth: astro.config.mjs:5 (site: 'https://archon.diy').
URL structure mirrors packages/docs-web/src/content/docs/<section>/
<page>.md — verified by the 62 pages the docs build produces.

* chore(workflows): switch default Opus pin to opus[1m] alias (#1395)

Anthropic's Opus 4.7 landed 2026-04-16; on the Anthropic API, opus /
opus[1m] now resolve to 4.7 with a 1M context window at standard
pricing. Using the alias instead of the hard-pinned claude-opus-4-6[1m]
lets bundled default workflows auto-track the recommended Opus version.

No explicit effort is set, so nodes inherit the per-model default
(xhigh on 4.7, high on 4.6).

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (#1398)

* fix(workflow): migrate piv-loop plan handoff to $ARTIFACTS_DIR (#1380)

The create-plan node used a relative path (.claude/archon/plans/{slug}.plan.md)
that the AI agent would sometimes write to a different location, breaking all
downstream nodes that glob for the plan file. Migrated all plan/progress file
references to $ARTIFACTS_DIR/plan.md and $ARTIFACTS_DIR/progress.txt, matching
the pattern used by archon-fix-github-issue and other workflows.

Changes:
- Replace slug-based plan path with $ARTIFACTS_DIR/plan.md in create-plan node
- Replace ls -t glob discovery with direct $ARTIFACTS_DIR/plan.md reads in
  refine-plan, code-review, and fix-feedback nodes
- Replace empty-string guard with file-existence check in implement-setup bash
- Migrate progress.txt references in implement loop to $ARTIFACTS_DIR/
- Add explicit plan/progress paths in finalize node
- Regenerated bundled-defaults.generated.ts

Fixes #1380

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(workflow): address review findings in archon-piv-loop

- Rename 'Step 2: Write the Plan' to 'Step 2: Plan File Location' to
  eliminate the duplicate heading that collided with Step 3's identical
  title in the create-plan node
- Guard implement-setup against a 0-task plan file: exit 1 with a
  clear error when no '### Task N:' sections are found, preventing a
  silent no-op implement loop
- Remove 2>/dev/null from code-review commit so pre-commit hook failures
  and other stderr are visible to the agent instead of silently swallowed
- Replace '|| true' on git push in finalize with an explicit WARNING echo
  so push failures (auth, upstream conflict, no remote) surface to the
  agent rather than being silently ignored
- Regenerate bundled-defaults.generated.ts

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* chore(workflows): regenerate bundled defaults to match opus[1m] alias

The bundle was stale relative to the YAML sources after #1395 merged —
check:bundled was failing CI. Regenerated; no YAML edits.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(workflows): add anyFailed status derivation coverage for DAG executor (#1403)

PIV Task 1: Adds three new tests in a dedicated describe block
'executeDagWorkflow -- final status derivation' covering the anyFailed
branch (dag-executor.ts ~line 2956) that previously had no direct test:
- one success + one independent failure calls failWorkflowRun (not completeWorkflowRun)
- multiple successes + one failure calls failWorkflowRun (not completeWorkflowRun)
- trigger_rule: none_failed skips the dependent node but anyFailed still marks the run failed

Fixes #1381.

* docs/skill: add parameter-matrix.md quick-lookup reference

New reference for the archon skill: a single-glance lookup of which
parameter works on which node type, an intent-based "how do I..." table,
a consolidated silent-failure catalog, and an inline agents: section
(previously only referenced via archon.diy).

Purpose is complementary, not duplicative:
- workflow-dag.md remains the authoring guide
- dag-advanced.md remains the hooks/MCP/skills/retry deep-dive
- good-practices.md remains the patterns and anti-patterns
- parameter-matrix.md is the grep-this-first lookup when you know the
  outcome you want but not which field gets you there

Also registers the new reference in SKILL.md routing table.

* docs: point contributors at PR template and Closes #N convention

Add explicit references to .github/PULL_REQUEST_TEMPLATE.md in both
CONTRIBUTING.md and CLAUDE.md, plus a reminder to link issues with
Closes/Fixes/Resolves so they auto-close on merge. Repo-triage runs
were flagging dozens of partially-filled or unlinked PRs each cycle.

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage (#1428)

* feat(workflows): add maintainer-standup workflow for daily PR/issue triage

Daily morning briefing that pulls origin/dev, triages all open PRs and assigned
issues against direction.md, and surfaces progress vs. the previous run. Designed
for live-checkout use (worktree.enabled: false) so it can read its own state.

Layout under .archon/maintainer-standup/:
  - direction.md (committed) — project north-star: what Archon IS / IS NOT.
    Drives PR P4 polite-decline classification with cited clauses.
  - README.md / profile.md.example — setup docs and template for new maintainers.
  - profile.md, state.json, briefs/YYYY-MM-DD.md — gitignored, per-maintainer.

Engine:
  - 3 parallel gather scripts in .archon/scripts/maintainer-standup-*.ts
    (git-status, gh-data, read-context) — bun runtime, JSON stdout.
  - Synthesis node: command file with output_format schema for
    { brief_markdown, next_state }.
  - Persist node: tiny inline bun script writes both to disk.

Run-to-run continuity: state.json carries observed_prs/issues snapshots, so the
next run can detect what merged, what closed, what the maintainer shipped, and
which carry-over items aged past N days.

Also adds .archon/** to the ESLint global ignore list (matches the existing
.claude/skills/** pattern) since .archon/ is user content and not part of any
tsconfig project.

* fix(maintainer-standup): address CodeRabbit review on #1428

- gh-data: bump --limit 100 → 1000 on all_open_prs and warn loudly when
  the cap is hit; preserves the observed_prs invariant the next-run
  "resolved since last run" diff depends on. (CodeRabbit critical)
- maintainer-standup.md: clarify P1 CI signal — the gathered payload only
  carries mergeStateStatus, not statusCheckRollup; for borderline P1s,
  drill in via `gh pr checks <n>`. (CodeRabbit minor)
- workflow.yaml persist: write briefs under local YYYY-MM-DD (sv-SE
  locale) instead of UTC ISO date, so an evening run doesn't file
  tomorrow's brief and break recent_briefs lookups. (CodeRabbit minor)
- workflow.yaml persist: wrap state/brief writes in try/catch; on
  failure dump brief_markdown and next_state to stderr so a 5-minute
  Sonnet synthesis isn't lost to a transient disk error. (CodeRabbit minor)
- gh-data + git-status: switch from execSync (shell-string) to
  execFileSync (argv array) for git/gh invocations. Defense-in-depth
  against shell metacharacters in values that pass through (esp. the
  gh_handle from profile.md). (CodeRabbit nitpick)

* feat(workflows): support explicit tags in workflow YAML (#1190)

Add optional `tags: string[]` to `workflowBaseSchema`. Explicit values take precedence over keyword inference; `tags: []` suppresses inference end-to-end; omitting the field falls back to inference (backwards compatible). Non-array values warn-and-ignore matching the sibling `worktree`/`additionalDirectories` patterns.
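
The precedence rules described above can be sketched as follows. This is a hypothetical helper, not the loader's exact code; the real change lives in `workflowBaseSchema` and the loader's parse path.

```typescript
// Sketch of explicit-tags precedence, assuming the warn-and-ignore convention
// used by the sibling worktree/additionalDirectories fields.
function resolveTags(explicit: unknown, inferred: string[]): string[] {
  if (Array.isArray(explicit)) {
    // Explicit values win; an empty array suppresses inference end-to-end.
    return explicit as string[];
  }
  if (explicit !== undefined) {
    // Non-array values are warned about and ignored.
    console.warn("workflow `tags` must be an array; ignoring");
  }
  // Omitted field: fall back to keyword inference (backwards compatible).
  return inferred;
}
```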

* feat(workflows): add maintainer-review-pr and group maintainer workflows under maintainer/ (#1430)

* feat(workflows): add maintainer-review-pr and group maintainer workflows under .archon/workflows/maintainer/

Adds the maintainer-review-pr workflow — a Pi/Minimax-based PR triage
flow that gates on direction alignment, scope focus, and PR-template
quality before doing any deep review. If the gate clears, runs the
five review aspects (code/error-handling/test-coverage/comment-quality/
docs-impact) as parallel Archon nodes and auto-posts a synthesized
review comment. If the gate fails (direction conflict, multiple
concerns, sprawling scope), drafts a polite-decline comment and pauses
for the maintainer's approval before posting.

Reorganizes the existing maintainer-standup workflow into the same
subfolder so all maintainer-facing workflows live together. Subfolder
grouping is supported by the workflow loader (1 level deep, resolution
by filename).

What lands:

- .archon/workflows/maintainer/maintainer-standup.yaml (moved from
  .archon/workflows/maintainer-standup.yaml)
- .archon/workflows/maintainer/maintainer-review-pr.yaml (new)
- .archon/commands/maintainer-review-{gate,code-review,error-handling,
  test-coverage,comment-quality,docs-impact,synthesize,report}.md (new,
  Pi-tuned variants of the existing review-agent commands so they avoid
  Claude-only Task / sub-agent patterns)

Pi/Minimax integration:

- Uses provider: pi, model: minimax/MiniMax-M2.7 — verified via the
  e2e-minimax-smoke test that Pi correctly routes to Minimax (session
  jsonl confirms provider=minimax) and that Pi's best-effort
  output_format parser handles the gate's nested schema.
- Two test runs landed real comments: a direction-decline on PR #1335
  and a deep-review on PR #1369. Both were posted to GitHub via the
  workflow's gh pr comment node.

* chore(workflows): also group repo-triage under .archon/workflows/maintainer/

repo-triage is the third maintainer-facing workflow alongside maintainer-standup and maintainer-review-pr; group it in the same subfolder for consistency. Subfolder resolution is by filename so the workflow name is unchanged.

* feat(pi): use ModelRegistry to support custom models and skip auth for unmapped providers (#1284)

Closes #1096.

- Switch Pi provider model lookup from pi-ai's getModel() (static catalog
  only) to ModelRegistry.create(authStorage).find() so user-configured
  custom models in ~/.pi/agent/models.json (LM Studio, ollama, llamacpp,
  custom OpenAI-compatible endpoints) are discoverable.
- Remove the local lookupPiModel helper.
- For env-var-mapped providers (anthropic, openai, etc.) still throw
  with a pi /login hint when credentials are missing. For unmapped
  providers, log pi.auth_missing at info and continue so local models
  that don't need credentials work without ceremony.
- Surface modelRegistry.getError() in the not-found message and emit
  pi.model_not_found so users debugging custom-provider configs see the
  real cause (e.g. missing baseUrl in models.json).
- Guard AuthStorage.create() and ModelRegistry.create() with try/catch
  so a malformed ~/.pi/agent/auth.json surfaces with Pi-framed context
  instead of a raw SDK stack trace.
- Document the credential-free path for local providers in ai-assistants.md.

Co-authored-by: Matt Chapman <Matt@NinjitsuWeb.com>
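
The lookup-and-auth flow above can be sketched as below. All names here are hypothetical stand-ins (the real code is in `pi/provider.ts` and uses the pi-ai SDK's `ModelRegistry` and `AuthStorage` types); the sketch only illustrates the branching described in the bullets.

```typescript
// Hypothetical sketch of the new Pi model resolution path.
type PiModel = { id: string; provider: string; baseUrl?: string };

interface RegistryLike {
  find(provider: string, id: string): PiModel | undefined;
  getError(): string | undefined;
}

// Providers whose credentials map to known env vars; others (ollama, llamacpp,
// LM Studio, etc.) may be local and credential-free.
const ENV_MAPPED_PROVIDERS = new Set(["anthropic", "openai"]);

function resolveModel(
  registry: RegistryLike,
  provider: string,
  id: string,
  hasCredentials: boolean,
): PiModel {
  const model = registry.find(provider, id);
  if (!model) {
    // Surface the registry's own error so a bad models.json (e.g. missing
    // baseUrl) shows the real cause instead of a bare "not found".
    const cause = registry.getError();
    throw new Error(`Pi model not found: ${provider}/${id}${cause ? ` (${cause})` : ""}`);
  }
  if (!hasCredentials) {
    if (ENV_MAPPED_PROVIDERS.has(provider)) {
      throw new Error(`Missing credentials for ${provider}; run \`pi /login\``);
    }
    // Unmapped provider: log at info and continue without credentials.
    console.info("pi.auth_missing", { provider });
  }
  return model;
}
```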

* chore(workflows): group smoke-test workflows under test-workflows/ + add e2e-minimax-smoke (#1431)

* chore(workflows): group all smoke-test workflows under .archon/workflows/test-workflows/

Move the 7 existing e2e-*.yaml smoke tests plus the new e2e-minimax-smoke
test into a dedicated subfolder. Subfolder grouping is supported by the
workflow loader (1 level deep, resolution by filename) so workflow names
are unchanged. Mirrors the .archon/workflows/maintainer/ split landing
in #1430.

Also adds e2e-minimax-smoke.yaml — a sanity check that Pi correctly
routes to Minimax M2.7 via the user's local pi auth, and that Pi's
best-effort output_format parser handles a small nested schema. Asserts
routing by reading the most recent Pi session jsonl rather than asking
the model to self-identify (LLMs are unreliable narrators about their
own identity, especially when Pi's system prompt mentions other
providers as defaults).

* fix(e2e-minimax-smoke): address CodeRabbit review on #1431

- Widen find window from -mmin -3 to -mmin -10. The smoke's three Pi
  nodes plus the assert can collectively run several minutes on slow
  networks; 3 minutes was tight enough to false-FAIL on a healthy run.
  (CodeRabbit minor)
- Drop non-deterministic `head -1` over `find` output. find doesn't
  guarantee any order; on a tie, the wrong file would be picked. Now
  iterates all matching sessions and breaks on first one carrying the
  routing signal — any match is sufficient evidence. (CodeRabbit minor)
- Replace single-regex `'"provider":"minimax".*"modelId":"MiniMax-M2.7"'`
  with two separate greps joined by `&&`. JSON field order isn't part of
  Pi's contract; a future Pi release reordering `provider` and `modelId`
  in the model_change event would silently false-FAIL the original
  pattern. The new check is order-independent. (CodeRabbit major)
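
The order-independent check can be illustrated like this (the session content is illustrative, not a real Pi session file):

```shell
# Two separate greps joined by && assert both fields are present regardless of
# JSON field order; a single combined regex would depend on ordering.
session='{"type":"model_change","modelId":"MiniMax-M2.7","provider":"minimax"}'
if printf '%s' "$session" | grep -q '"provider":"minimax"' \
   && printf '%s' "$session" | grep -q '"modelId":"MiniMax-M2.7"'; then
  echo "routing confirmed"
fi
```

Note the sample JSON above deliberately puts `modelId` before `provider`: the original single-regex pattern would have failed on it, while the two-grep check passes.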

* fix(maintainer-review): address CodeRabbit findings on #1430 (#1432)

Six findings, two majors and four minors/nitpicks:

- gate.md L17 vs L77: resolved conflicting input-source instructions.
  Body claimed "all inline, no extra fetch" while a later phase
  permitted reading PULL_REQUEST_TEMPLATE.md. Now: explicit "one
  allowed extra read" callout in Phase 1 + matching wording in Gate C.
  (CodeRabbit major)

- gate.md fenced blocks: added missing language identifiers (text/json/
  markdown) to satisfy markdownlint MD040. (CodeRabbit minor)

- gate.md L155 + read-context.ts: deterministic clock. The 3-day deadline
  was anchored to prior_state.last_run_at, which can be stale and produce
  past-dated deadlines. Moved both today and deadline_3d into the
  read-context.ts output (computed via sv-SE locale → ISO date in local
  time) and instructed the gate to use $read-context.output.deadline_3d
  directly. LLMs are unreliable at calendar arithmetic; this avoids it
  entirely. (CodeRabbit major)

- maintainer-review-pr.yaml fetch-diff: dropped 2>/dev/null on gh pr diff
  so auth / network / deleted-PR failures fail the node instead of
  feeding an empty diff to the gate. Empty-but-successful diff (PR has
  no changes) is now an explicit marker the gate can detect. (CodeRabbit
  minor)

- maintainer-review-pr.yaml approve-unclear: added capture_response: true
  so the maintainer's approve comment flows to the report node. Reject
  reasoning is already captured by Archon's run record. (CodeRabbit
  minor)

- maintainer-review-pr.yaml post-decline + report.md: the gh pr edit
  --add-label call previously swallowed all errors with || true and the
  report still claimed the label was applied. Now writes applied/skipped
  to $ARTIFACTS_DIR/.label-applied + the gh stderr to .label-error so
  the report can describe the actual outcome. (CodeRabbit nitpick)

* fix(workflows): approval gate bypass after reject-with-redraft on resume (#1435)

* fix(workflows): approval gate bypass after reject-with-redraft on resume

When an approval node was rejected with on_reject.prompt, the synthetic
PromptNode built to run the on_reject prompt reused the approval gate's
own node ID. executeNodeInternal then wrote a node_completed event with
that ID, causing getCompletedDagNodeOutputs to treat the gate as already
completed on the next resume — bypassing the human gate entirely.

Fix: give the synthetic node the ID `${node.id}:on_reject` so its
node_completed event has a distinct step_name that won't match the
approval gate slot in priorCompletedNodes.

Adds a regression test asserting no node_completed event with the
approval gate's ID is written during on_reject execution.

Fixes #1429
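
The ID derivation at the heart of the fix can be sketched in a few lines (hypothetical helper names; the real change is inside the approval-node executor):

```typescript
// Synthetic on_reject prompt nodes get a distinct ID so their node_completed
// event never matches the approval gate's slot in priorCompletedNodes.
function onRejectNodeId(approvalNodeId: string): string {
  return `${approvalNodeId}:on_reject`;
}

// Resume-time check: with the distinct ID, a completed on_reject run no longer
// marks the gate itself as done, so the human gate fires again.
function gateAlreadyCompleted(completedNodeIds: Set<string>, gateId: string): boolean {
  return completedNodeIds.has(gateId);
}
```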

* test(workflows): add positive assertion and SSE side-effect comment for on_reject synthetic node

Add complementary positive assertion to the regression test to verify that
node_completed is written exactly once with step_name 'review:on_reject',
ensuring future refactors that suppress the event entirely would be caught.

Add inline comment in executeApprovalNode documenting the known SSE side-effect:
node_started/node_completed events with nodeId='review:on_reject' flow through
the SSE pipeline into the web UI, resulting in a transient phantom node in the
execution view. This is cosmetic-only — the human gate contract is preserved.

* simplify: reduce duplicate cast pattern in on_reject test assertions

* feat(workflows): add mutates_checkout to allow concurrent runs on live checkout (#1438)

* feat(workflows): add mutates_checkout field to skip path-lock for concurrent runs

Add `mutates_checkout: boolean` (optional, default true) to the workflow
schema. When set to false, the executor skips the path-exclusive lock
that serializes all runs on the same working path, allowing N concurrent
runs on the same live checkout.

The primary use case is `maintainer-review-pr`, which reads shared state
but writes only to per-run artifact paths and GitHub PR comments — two
parallel reviews of different PRs should not fail with "Workflow already
active on this path".

Changes:
- `schemas/workflow.ts`: add optional `mutates_checkout` field
- `loader.ts`: parse and propagate the field (warn-and-ignore on invalid values)
- `executor.ts`: wrap path-lock guard in `if (workflow.mutates_checkout !== false)`
- `executor.test.ts`: two new tests in the concurrent-run guard suite
- `maintainer-review-pr.yaml`: opt in with `mutates_checkout: false`
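
The executor guard reduces to a one-line predicate, sketched here with hypothetical names:

```typescript
// Default is true: only an explicit `mutates_checkout: false` skips the
// path-exclusive lock that serializes runs on the same working path.
type WorkflowLockFields = { mutates_checkout?: boolean };

function needsPathLock(workflow: WorkflowLockFields): boolean {
  return workflow.mutates_checkout !== false;
}
```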

* test(workflows): add loader tests for mutates_checkout parsing

- Add 5 tests covering false, true, omitted, and invalid (string "yes") values
- Invalid non-boolean values are dropped with a warning — now explicitly tested
- Remove the // end mutates_checkout guard trailing comment (no precedent in file)
- Clarify loader comment: "parse/warn pattern" not "warn-and-ignore pattern" to avoid implying the return style matches interactive

* simplify: collapse nodeType/aiFields pair into single nonAiNode object in parseDagNode

* docs: replace String.raw with direct assignment in script node examples (#1434)

* docs: replace String.raw with direct assignment in script node examples

String.raw`$nodeId.output` fails silently when substituted output contains
a backtick, terminating the template literal early and producing cryptic parse
errors. JSON is valid JS expression syntax, so direct assignment is safe for
all valid JSON values including those with backticks.

- Replace String.raw pattern in dag-workflow.yaml example
- Replace String.raw pattern in archon-workflow-builder.yaml template
- Add CAUTION bullet in workflow-dag.md Script Node section
- Add Silent Failures item #14 in parameter-matrix.md
- Add Starlight caution aside in script-nodes.md
- Extend script bodies bullet in variables.md
- Regenerate bundled-defaults.generated.ts

Fixes #1427
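
The failure mode can be demonstrated directly. This is an illustrative simulation of the substitution, not the workflow engine's code: a node output containing a backtick survives direct assignment because valid JSON is valid JS expression syntax, whereas the same text spliced inside a `String.raw` template would terminate the literal early.

```typescript
// Node output containing a backtick (illustrative value).
const output = '{"note":"run `bun test` first"}';

// Simulate the recommended pattern: splice the output as a bare JS expression.
const generated = `const data = ${output}; data.note`;

// This parses and evaluates despite the backtick inside the JSON string.
const note = eval(generated) as string;

// The old pattern spliced the same text inside String.raw`...`, where the
// inner backtick would end the template literal and break the generated script.
```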

* docs: fix Rule 6 in generate-yaml prompt to distinguish bun vs uv patterns

Rule 6 still referenced JSON.parse after the example was updated to direct
assignment, creating a contradiction for the AI code generator. Update the
prose to explicitly distinguish TypeScript/bun (direct assignment) from
Python/uv (json.loads), matching the updated embedded example.

* chore(workflows): group experimental workflows under .archon/workflows/experimental/

Move two repo-scoped workflows that were sitting untracked at the workflow
root into a dedicated subfolder. Subfolder grouping is supported by the
loader (1 level deep, resolution by filename), so workflow names are
unchanged and the /release skill still resolves archon-release correctly.

Files moved:
- archon-fix-github-issue-experimental.yaml — Path-A variant of the
  issue-fix workflow used today to land #1434, #1435, #1438.
- archon-release.yaml — the live release workflow used by the /release
  skill end-to-end (validate -> binary smoke -> version bump -> changelog
  -> approval -> commit -> PR -> tag -> Homebrew formula update).

* fix(workflows): export ARTIFACTS_DIR, LOG_DIR, BASE_BRANCH to bash nodes (#1387)

executeBashNode previously only merged explicit envVars on top of
process.env. The three well-known workflow directories (artifactsDir,
logDir, baseBranch) were passed as function parameters and used for
compile-time substitution of $ARTIFACTS_DIR / $LOG_DIR / $BASE_BRANCH
in the script body, but were never added to the subprocess environment.

As a result, any script that relied on shell-runtime expansion — e.g.
JSON_FILE="${ARTIFACTS_DIR}/foo.output.json" inside a heredoc, an
inherited helper script, or a `bash -c` subshell — saw the variable
unset and silently fell back to its default (typically an empty string
or "."), writing artifacts to the workflow cwd instead of the nominal
artifacts directory.

Always build subprocessEnv from process.env plus the three well-known
directories, then allow explicit envVars to override. Compile-time
substitution behavior is unchanged; existing scripts that do not
reference these variables are unaffected; user-supplied envVars still
win on conflict.
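
The env construction described above can be sketched as (hypothetical function shape; the real change is inside `executeBashNode`):

```typescript
// Build the subprocess environment: process.env, then the three well-known
// workflow directories, then explicit envVars, which win on conflict.
function buildSubprocessEnv(
  base: Record<string, string | undefined>,
  dirs: { artifactsDir: string; logDir: string; baseBranch: string },
  envVars: Record<string, string>,
): Record<string, string | undefined> {
  return {
    ...base,
    ARTIFACTS_DIR: dirs.artifactsDir,
    LOG_DIR: dirs.logDir,
    BASE_BRANCH: dirs.baseBranch,
    ...envVars,
  };
}
```

With this in place, shell-runtime expansions like `JSON_FILE="${ARTIFACTS_DIR}/foo.output.json"` inside heredocs or subshells see the real value instead of an empty string.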

* fix(workflow): substitute $nodeId.output refs in approval messages (#1426)

* fix(workflow): substitute $nodeId.output refs in approval messages

Approval node messages were emitted as raw strings, bypassing the
substituteNodeOutputRefs() pass that prompt/bash/loop/cancel nodes
all run. This made interactive workflows like atlas-onboard show
literal "$gather-context.output.repo_name" placeholders to humans
at HITL gates, leaving them unable to know what they were approving.

Fix: rendered the approval.message through substituteNodeOutputRefs
once at the top of the standard approval gate path, then used the
resolved string in all 4 emission sites (safeSendMessage,
createWorkflowEvent, pauseWorkflowRun, event-emitter).

Test: new dag-executor.test case wires a structured-output upstream
node into an approval node and asserts pauseWorkflowRun receives the
substituted message ("Repo: hcr-els | App: CCELS | Port: 3012")
rather than the literal placeholders.

Repro: any workflow with an approval node whose message references
$nodeId.output[.field]. Observed in the wild on atlas-onboard's
confirm-context HITL gate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
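
The resolve-once-reuse-everywhere shape of the fix can be sketched with a stand-in for `substituteNodeOutputRefs` (the real function handles `.field` paths and more; this hypothetical version only swaps `$nodeId.output` for the node's output):

```typescript
// Resolve the approval message a single time; all emission sites
// (safeSendMessage, createWorkflowEvent, pauseWorkflowRun, event-emitter)
// then use the same resolved string.
function renderApprovalMessage(raw: string, outputs: Record<string, string>): string {
  return raw.replace(/\$([\w-]+)\.output/g, (match, nodeId) => outputs[nodeId] ?? match);
}
```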

* test(workflow): extend approval-substitution test to cover all 4 emission sites

Per CodeRabbit review: the original test only verified pauseWorkflowRun
received the substituted message, but the fix touches 4 emission sites.
A future regression at safeSendMessage / createWorkflowEvent / event-emitter
would silently leave the test passing while users still saw raw $node.output
placeholders.

Adds two additional assertions:
- platform.sendMessage prompt contains substituted message + does NOT
  contain literal $gather-context.output placeholders
- The persisted approval_requested workflow event's data.message is
  substituted

Event-emitter assertion deferred (no existing pattern for spying on the
global emitter in this test file). Covering two of the three secondary
surfaces closes the practical regression risk — both are user-visible
(chat prompt + audit-log event); the emitter is internal only.

Test count: 7 pass / 22 expect() (was 18). Full suite 193 pass / 353
expect() — no regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (#1286) (#1367)

* feat(workflows): expose $LOOP_PREV_OUTPUT in loop node prompts (#1286)

Adds a new substitution variable that carries the previous loop iteration's
cleaned output into the next iteration's prompt. Empty on iteration 1; the
prior iteration's output (after stripCompletionTags) on iteration 2+.

Why: fresh_context: true loops have no way to reference what the previous
pass produced or why it failed without dragging the full session forward.
$LOOP_PREV_OUTPUT closes that gap with zero session-cost — same trust
boundary as $nodeId.output, no new external surface.

Changes:
- packages/workflows/src/executor-shared.ts: substituteWorkflowVariables
  accepts a 10th positional loopPrevOutput arg and substitutes
  $LOOP_PREV_OUTPUT (defaults to '').
- packages/workflows/src/dag-executor.ts: executeLoopNode passes
  lastIterationOutput on iteration 2+ (and explicit '' on iteration 1 /
  the first iteration of an interactive resume, since lastIterationOutput
  is a per-call variable that does not survive resume metadata).
- Unit tests: 3 new cases in executor-shared.test.ts.
- Integration tests: 2 new cases in dag-executor.test.ts verifying the
  prompt sent to the AI on iter 1 vs iter 2, and that the value reflects
  cleaned output (no <promise> tags).
- Docs: variables.md, loop-nodes.md (new "Retry-on-failure" pattern),
  CLAUDE.md variable reference.

Backward compatibility: prompts that don't reference $LOOP_PREV_OUTPUT are
unaffected. All 843 workflow tests + type-check + lint + format:check +
bun run validate pass locally.
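
The substitution itself is simple; a minimal sketch (hypothetical helper, standing in for the `loopPrevOutput` argument threaded through `substituteWorkflowVariables`):

```typescript
// Replace every occurrence of $LOOP_PREV_OUTPUT with the previous iteration's
// cleaned output; defaults to the empty string on iteration 1.
function substituteLoopPrevOutput(prompt: string, loopPrevOutput = ""): string {
  return prompt.split("$LOOP_PREV_OUTPUT").join(loopPrevOutput);
}
```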

* docs: address coderabbit review on variables/loop-nodes

- variables.md: include $LOOP_PREV_OUTPUT in substitution-order list and
  availability table to match the new variable row at line 30
- loop-nodes.md: document the interactive-resume exception where the first
  iteration after an approval-gate resume still receives an empty
  $LOOP_PREV_OUTPUT regardless of iteration number (per dag-executor.ts
  L1781-1783 where i === startIteration always clears prev output)

* docs(changelog): add Unreleased entry for $LOOP_PREV_OUTPUT (#1367 review)

* test(loop): add resume-from-approval integration test for $LOOP_PREV_OUTPUT (#1367 review)

Per maintainer-review-pr suggestion (Wirasm): two-call integration test
covering the resume-from-approval scenario.

  - Call 1: fresh interactive loop pauses at the gate after iteration 1 and
    asserts $LOOP_PREV_OUTPUT substitutes to empty on iter 1 (no prior
    output) plus the gate pause is recorded.
  - Call 2: resumed run with metadata.approval populated. The first
    resumed iteration must substitute $LOOP_PREV_OUTPUT to '', NOT to the
    paused run's iter-1 output (which lived in a different process and is
    not persisted). $LOOP_USER_INPUT still flows through as normal.

Locks the documented invariant at dag-executor.ts:1769-1772.

---------

Co-authored-by: voidborne-d <DottyEstradalco@allergist.com>

* feat(maintainer-standup): surface contributor replies since last run (#1457)

The brief was missing a key signal — when contributors reply on PRs or
issues, the maintainer wouldn't see it explicitly. Empirically reviewed
PR replies were buried under aggregate updatedAt timestamps with no
indication of WHO replied or WHAT they said.

This adds a new "Replies waiting on you" section to the daily brief,
sourced from two paginated GitHub API calls scoped by since=last_run_at:

  - /repos/{o}/{r}/issues/comments  PR + issue conversation comments
  - /repos/{o}/{r}/pulls/comments   inline code-review comments

Filters applied:
  - Skip the maintainer's own comments (gh_handle from profile.md)
  - Skip GitHub bot accounts (login ending in [bot]) — coderabbitai,
    chatgpt-codex-connector, dependabot, etc. They post a constant
    churn of automated review tooling that drowns out human replies;
    the maintainer wants the latter.

Output is grouped by PR/issue number with kind classification:
  - issue              comment on a non-PR issue
  - pr_conversation    PR conversation-level comment
  - pr_review          inline code-review comment (most actionable —
                       usually needs a code-level response, so kind
                       upgrades to pr_review whenever review comments
                       arrive on a PR that also has conversation ones)

Sorted by recency (newest reply first). Synthesizer reads
gh-data.output.replies_since_last_run and renders a section.

Verified on a backdated state.json (last_run_at = yesterday morning):
22 human replies on 22 PRs/issues, bot noise filtered (32 → 22 after
the [bot] filter). Surfaces…